The Future Model of the Future Network: Harnessing the Hidden Layer

We don’t build networks like we used to.  That fundamental fact should illustrate why it’s important to look at how we should build them, given the changes in both technology and demand that have driven networking in a different direction than it followed in even the recent past.  The “right” answer to any network question is dictated by the relationship between capabilities and requirements, and that relationship is changing very quickly.  To understand that, we have to go back to look at what was true, then forward to look at what is, and will be.

Harken back fifty years, to the dawn of packet networking.  We had networks then, to be sure, but they were telephone networks that had been pressed into service to carry data.  It was an imperfect marriage, but at the time that didn’t matter much because data demand was very limited.  People called each other on the phone, and that’s what “communications networks” were all about.

Telephone networks have two important features, created by both human behavior and the pricing policies of the time.  One is “locality”.  Calls tended to be substitutes for physical meetings among people who did meet physically at times, but not all the time.  Long-distance calling was relatively rare because of that substitution factor, but also because long-distance was expensive.  Thus, most traffic went from the copper loop access to the central office, and then back to another “local” loop.  The other is “connectivity”; the purpose of the network was to connect people, playing no real role in their relationship beyond that connection.

When the World Wide Web came along, it generated a real consumer data application.  We all know now that the sudden rush to get online transformed the mission of networks from handling calls to handling IP packets, but there were other important transformations that were missed.

The biggest impact was that the Internet and the web are about experiences, not connections.  Users go online to get things, such as retail access or content.  The ability to connect through the Internet exists, but it’s not the experience that generates most of the traffic.  The “connectivity” property is devalued, in short.

And so is “locality”.  Your average web server or content portal isn’t in your neighborhood.  Your traffic is far less likely to terminate on the same facility that it originates on, and that means that the “access network” is really not a part of the data or service network, the Internet.  It’s not responsible for connecting to a bunch of different places, just for getting you onto the public data network that can make those connections.

Mobile networks, when they came along, created another mandate for the access network, which was to accommodate the mobility of the users of the network.  This mobility was a property of the devices and users, not a new destination, and so it again created a mission for access—and now metro—networks that had nothing to do with where the traffic was eventually going, which was still to that public data network.

Now let’s look at the technology side.  From the time of digital voice, we’ve had services that, on a per-user per-instance basis, used less bandwidth than the trunks could carry.  Voice calls used 64 kbps of channel capacity, and we had T1 (in the US) and T3 trunks that carried 24 and over 600 times that, respectively.  Logically, if you build a communications network, you want to avoid trenching trunk lines that are under-utilized, so we developed aggregation models to combine traffic onto trunks.  That aggregation originally took place in the central offices where loops terminated, but it migrated outward to fiber remotes, cell towers, and so forth.
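To put rough numbers on that gap, here’s a small sketch using the standard DS0 channel counts for those trunks (textbook figures only, nothing drawn from this piece):

```python
# Illustrative arithmetic only: why per-call bandwidth made aggregation
# attractive.  A digital voice call occupies a 64 kbps DS0 channel; the
# trunks of the era carried many such channels (channel counts are the
# standard DS0 payloads, slightly below the raw line rate due to framing).
VOICE_KBPS = 64
trunks = {
    "T1": 24,    # 24 DS0s on a 1.544 Mbps line (US)
    "T3": 672,   # 672 DS0s on a 44.736 Mbps line -- "over 600 times" one call
}
for name, channels in trunks.items():
    payload_mbps = channels * VOICE_KBPS / 1000
    print(f"{name}: {channels} calls, ~{payload_mbps:.1f} Mbps of voice payload")
```

No single user came close to filling one of those pipes, which is exactly why traffic had to be combined before it was carried.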

Even the advent of packet networks and the Internet hasn’t eliminated aggregation.  We have “core” and “edge” routers as proof of that.  At the same time, we’ve had different aggregation technologies in the access network, including (gasp!) ATM!  The point is that access and core, or access and service, were beginning to diverge in both mission and technology.

When is aggregation a good idea?  Answer: when the unit cost of transport on an aggregated trunk is much lower than it would be on a per-user or per-destination basis.  If there’s enough traffic directly from point A to point B that the transport economy of scale isn’t much worse than would be produced by further aggregation, then run the direct trunk.  And fiber economies are changing with new technologies like agile optics and even DWDM.
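As a sketch of that break-even logic, with entirely hypothetical traffic volumes, trunk costs, and threshold, the decision looks something like this:

```python
# A minimal sketch of the aggregation decision described above.  All numbers
# are hypothetical; real trunk economics depend on distance, fiber plant,
# optical technology (agile optics, DWDM), and operations cost.

def unit_cost(trunk_cost_per_month: float, offered_gbps: float) -> float:
    """Monthly cost per Gbps if this trunk carries only this traffic."""
    return trunk_cost_per_month / offered_gbps

direct_ab_gbps = 100.0             # traffic flowing directly from A to B
direct_trunk_cost = 40_000.0       # monthly cost of a dedicated A-to-B trunk

aggregated_gbps = 400.0            # A-to-B traffic pooled with other flows via a hub
aggregated_trunk_cost = 150_000.0  # monthly cost of the larger shared trunk

direct = unit_cost(direct_trunk_cost, direct_ab_gbps)           # 400 per Gbps
aggregated = unit_cost(aggregated_trunk_cost, aggregated_gbps)  # 375 per Gbps

# The rule of thumb from the text: aggregate only while the shared trunk's
# unit cost is much lower; once the direct path is close, run the direct
# trunk.  (The 1.25 threshold below is arbitrary, purely for illustration.)
print(f"direct: {direct:.0f}/Gbps, aggregated: {aggregated:.0f}/Gbps")
print("run the direct trunk" if direct <= 1.25 * aggregated else "keep aggregating")
```

The better fiber economics get, the more point-to-point pairs clear that bar without help from aggregation.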

One consequence of all of this is that networks divide into two zones, one for access and one for service.  In the access zone, everything aggregates toward the service-zone (public data network) gateway point.  In the service zone, the goal is to give all those gateway points as direct a path to primary resources (those that generate the most delivery bandwidth) as possible.  The presumption, though, is that service resources are distributed through the service network, not placed in a nice convenient central point.  Any of the service network gateway points thus has to be able to get to any service resource, while access network on-ramps need only connect with the service gateway points.

All of the pathways between service gateway points and service resources in the service network are likely to carry enough traffic to justify a direct trunk, meaning at least that it probably wouldn’t make sense to pass the traffic through a transit router.  That makes each path a candidate for either agile optics or a low-level groomed electrical “tunnel”.  Networks would then look like a mesh of low-layer connections between what were essentially large access routers.  However, “large” here wouldn’t be large relative to today’s core routers; those big transit boxes would be devalued by the mesh of opto-electrical paths below.
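A quick sketch of what that mesh implies, using made-up counts of gateways and resource sites (not figures from anywhere in this piece):

```python
# A sketch, with invented counts, of the mesh the text describes: every
# service-zone gateway gets a direct Level 1/2 path to every primary
# resource point, rather than reaching it through transit routers.
from itertools import product

gateways = [f"GW{i}" for i in range(1, 9)]       # hypothetical: 8 service gateways
resources = [f"POOL{i}" for i in range(1, 6)]    # hypothetical: 5 resource/content sites

mesh_paths = list(product(gateways, resources))  # one groomed tunnel per pair
print(f"{len(mesh_paths)} gateway-to-resource tunnels, each a candidate for "
      f"agile optics or a groomed electrical path")
# With these 40 paths in place, the big any-to-any transit routers have far
# less to do; the "large access routers" at the edges carry the load.
```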

The “tunnel model” could also be called the “mesh model” because the goal would be to establish Level 1 or 2 tunnels between every element in the network, not the users but the edge points or resources.  If you are “of” the network, you get connected to the other “of’s”.  There could be exceptions to this, for example, to eliminate the need to connect content resources to each other, and there could be a default traditional multi-transit-router path to get low-level traffic around.  The mainstream stuff, though, would bypass the traditional transit routing and go edge to edge.  The impact of this on optics and routing is pretty obvious, but we should look at each of the main ones anyway.

First and foremost, we’d see a tapping off of features and dollars from the router layer, to be subducted downward into that Level 1 or 2 tunnel or “opto-electric” agile-grooming layer.  This layer would do the steering between edge points on the network, and would also supply fault tolerance and scalability features.  This approach could benefit optical network vendors, as I said in a prior blog, but those vendors have had the opportunity to push this model for at least a decade and have done nothing.  Probably, that will continue.

Second, we’d see a transformation of routers.  Instead of big “core” routers that perform transport missions, we’d have “on-ramps” to get users or resources onto an access network, and “gateways” to get between an access network and a service network, or to connect mass resources directly onto a service network.  Transit routers typically need any-to-any connectivity, but on-ramp and gateway routers really just pass a bunch of input ports to a small number of output ports.  Remember, the virtual pipes in our opto-electrical mesh aren’t each a separate interface; the traffic is simply groomed or aggregated at Level 1 or 2.
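Here’s a toy model of that fan-in shape, with invented port and tunnel names; the only point is that the gateway’s job is static grooming toward a few uplinks, not any-to-any forwarding:

```python
# A toy model (hypothetical names and counts) of the on-ramp/gateway shape
# described above: many access-side ports funnel into a handful of groomed
# Level 1 or 2 tunnels toward the service zone, not into an any-to-any fabric.
from collections import Counter

access_ports = [f"acc{i}" for i in range(480)]              # e.g. PON, DSLAM, cell backhaul feeds
tunnels = ["svc-tunnel-A", "svc-tunnel-B", "svc-tunnel-C"]  # a few groomed uplinks

# Static grooming: each access port is pinned to one uplink tunnel, so there
# is no per-packet any-to-any decision of the kind a transit router must make.
grooming = {port: tunnels[i % len(tunnels)] for i, port in enumerate(access_ports)}

print(Counter(grooming.values()))   # 480 inputs spread across 3 outputs
```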

The third thing we could expect is a simplification of the process of routing.  What’s the best route in a resilient mesh?  There isn’t any, because every destination has its own unique tunnel so logically there’s only one path and no need to select.  Think of all the adaptive behavior that goes away.  In addition, the opto-electrical mesh is actually protocol-independent, so we could almost treat it as an SDN and say we’re “emulating” IP features at the edge.  The tunnels might actually be SDN, of course, or anything else, or any mixture of things.  They’re featureless at the IP level.
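A minimal sketch of that collapse in route selection, with invented edge and tunnel names, might look like this:

```python
# When every edge has a pre-built tunnel to every other edge, "routing"
# collapses into a lookup: no SPF runs, no adaptive metrics, no transit hops.
# All names here are illustrative, not from any real device or protocol.
tunnel_to = {
    "edge-NYC": "tunnel-17",
    "edge-CHI": "tunnel-22",
    "edge-SFO": "tunnel-31",
}

def forward(dest_edge: str) -> str:
    # One destination, one tunnel -- there is nothing to "select".
    return tunnel_to[dest_edge]

print(forward("edge-CHI"))   # -> tunnel-22
```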

That leads to the next thing, which is the simplification of device software.  There’s a lot less work to do, there are different kinds of work (the “emulation” of IP features) to do, and there’s every reason to think that different forwarding models might be overlaid on the same tunnel structure.  VPNs, the Internet, VLANs, or whatever you like, could be supported on the same opto-electrical core.  In my view, in fact, the presumption would be that the higher-layer devices, the “routers-in-name-only”, would be IP control and management emulators combined with agile forwarding capabilities, like those P4 would support.
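As a sketch of that overlay idea (all service names, prefixes, and tunnel IDs invented), several forwarding models can share one featureless tunnel layer, with each “router-in-name-only” doing little more than a service-specific lookup:

```python
# A sketch (all names invented) of overlaying several forwarding models on one
# shared tunnel mesh.  Each service keeps its own lookup table; every table
# resolves to the same protocol-independent Level 1/2 tunnels underneath.
shared_tunnels = {"edge-A": "t-101", "edge-B": "t-102", "edge-C": "t-103"}

services = {
    "internet": {"203.0.113.0/24": "edge-B", "198.51.100.0/24": "edge-C"},
    "vpn-blue": {"10.1.0.0/16": "edge-A"},
    "vlan-42":  {"aa:bb:cc:dd:ee:ff": "edge-C"},
}

def forward(service: str, key: str) -> str:
    edge = services[service][key]   # service-specific lookup (IP route, VPN prefix, MAC...)
    return shared_tunnels[edge]     # every service rides the same tunnel layer

print(forward("internet", "203.0.113.0/24"))   # -> t-102
print(forward("vpn-blue", "10.1.0.0/16"))      # -> t-101
```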

And then there’s the final thing, which is consolidation of functions into a smaller number of devices.  We have an explicit optical layer and routing layer in networks today, and we’re looking at the Level 1 and 2 “grooming” or “tunnel” layer.  We can’t end up with three layers of devices, because that would destroy any capital or operational economy.  We have to end up with less.  The solution to this new network model, in device terms, has to cut costs somewhere, in some way.  We can eliminate layers, consolidate electrical and optical.  We can build out optical and dumb down electrical, with simplified routing and open-model devices and software.  We have to do one, or both.  Things can’t go on as they have been going, because that’s lowering operator profit per bit and threatening network expansion.

All of this is driven by the assumption that profit per bit has to be improved by improving cost per bit.  Operators have faced declining revenue per bit for a decade or more, and the most obvious counter to this is to reduce cost per bit so profit per bit stabilizes.  We’re seeing lots of different steps to address cost reduction, but what I’m calling the “new network” is one of the most obvious but also most profound.  If networks are fundamentally different today, both because of the demands placed on them and because of the technology that builds them, then why should today’s networks look, topologically speaking, like models of the past?

Inertia, of course, but whose inertia?  Operators blame vendors for not responding to their profit issues, and instead supporting their own profits.  If this is true (which at least in part it is), then vendors are doomed to eventual disappointment.  You can’t expect your customers to run at a loss so you can earn improved profits.  The new network model I’m describing may not be a policy of operators, a united vision that spans the globe, but it’s still a reality because what drives it is a reality.

Operators are at a critical juncture.  Do they believe, really believe, their vendors are scamming them?  If so, then an open model for devices and networks is the answer.  Some operators are already embracing that.  Others may well follow.  Tomorrow we’ll look at the way the leading operator in open transformation is proceeding, and how their approach might (or might not) sweep the market.