The Ultimate Future Infrastructure: Do We Want It?

What exactly might a next-generation network look like?  If we actually try to answer that question, it becomes clear that there are two parallel visions to contend with.  On the physical side, we have to build the network with fiber transport and something electrical above it, whether it’s a set of special-function devices or hosted software.  On the “logical” side, it’s probably an IP or Ethernet network, something as narrow as a VPN or VLAN or as broad as the Internet.  The fact that these two sides exist may be troubling in a sense, promising more complications, but those sides are the essence of what “next-generation” is really about, and they may dictate, or be dictated by, how we get there.

The simplest approach to our network dualism was promised years ago by Nicira.  You build an overlay network, something with its own independent nodes, addressing, and forwarding, that uses an arbitrary physical-network underlay purely as transport.  Connectivity is never managed in that underlay, and so every piece of transport infrastructure can be optimized for the mission represented by its own place and time in the picture: what’s best for here and now.
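To make the split concrete, here is a minimal sketch (in Python, with entirely hypothetical names; it is not drawn from Nicira’s actual implementation) of an overlay node that keeps its own addressing and forwarding while treating the underlay as nothing but a transport endpoint:

```python
# Minimal sketch of overlay-over-underlay forwarding (illustrative only).
# The overlay keeps its own addresses and forwarding table; the underlay is
# just a transport endpoint the overlay tunnels across.

from dataclasses import dataclass

@dataclass
class UnderlayEndpoint:
    ip: str        # physical/transport address, never visible to overlay users
    port: int

class OverlayNode:
    def __init__(self):
        # overlay address -> underlay endpoint that can reach it
        self.forwarding: dict[str, UnderlayEndpoint] = {}

    def learn(self, overlay_addr: str, via: UnderlayEndpoint) -> None:
        self.forwarding[overlay_addr] = via

    def send(self, overlay_dst: str, payload: bytes) -> tuple[UnderlayEndpoint, bytes]:
        # Encapsulate: the underlay only ever sees its own endpoint addresses,
        # so it can be re-optimized without touching overlay connectivity.
        via = self.forwarding[overlay_dst]
        header = f"OVERLAY dst={overlay_dst} ".encode()
        return via, header + payload

node = OverlayNode()
node.learn("tenant-a:10.0.0.5", UnderlayEndpoint("192.0.2.17", 4789))
print(node.send("tenant-a:10.0.0.5", b"hello"))
```

The point of the sketch is that the underlay address never appears in the overlay’s forwarding logic, which is why the transport underneath can be re-optimized at will.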

This is a good way to start NGN discussions because it shows the contrast that virtualization can bring.  If everything is virtual, then the hosting and transport underneath are just plumbing, and all the service value migrates upward into the virtualization layer.  This is a sharp contrast with networking today, where services are properties of the infrastructure deployed.  That tight binding between services and infrastructure is what makes network investment look like sunk cost any time a major shift in the market happens.  We’re not at the point of full virtualization yet, in no small part because the technology shifts on the horizon have limited goals and don’t take us to the future except in combination.  But if the future is where we’re going, then we should consider it now to be sure our short-term steps are heading in the right direction.

In our virtual-service world, the primary question we have to address is whether hosted/virtual nodal elements can handle the traffic of the virtual networks they support.  I can build a VPN or VLAN using independent software router/switch instances hosted on standard servers, and in fact build it in a number of ways, but only if I’m sure that the traffic load can be managed.  This suggests that we could use overlay virtualization for services other than the Internet, but probably not for the Internet itself unless we could rethink how “the Internet” works.

There are two primary models for a “connection network” of any sort.  One model says that you have a tunnel mesh of endpoints, meaning that everything has a direct pipe to everything else.  While this doesn’t scale to Internet size, it works fine for private LANs and VPNs.  The other model says that you can look at the traffic topology of the endpoints (who exchanges what with whom) and then place nodes at critical points to create aggregate trunks.  This mechanism reduces the number of tunnels you have to build to get full connectivity.  It’s how L2/L3 networks tend to be built today, in the “real” as opposed to the “virtual” age.
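The scaling difference between the two models is easy to put numbers on.  Here’s a back-of-the-envelope sketch in Python; the endpoint counts and fan-in figure are illustrative assumptions, not measurements of any real network:

```python
# Back-of-the-envelope tunnel counts for the two connection-network models.
# All numbers are illustrative assumptions.

def full_mesh_tunnels(endpoints: int) -> int:
    # every endpoint has a direct pipe to every other endpoint
    return endpoints * (endpoints - 1) // 2

def aggregated_tunnels(endpoints: int, fan_in: int) -> int:
    # endpoints home to aggregation nodes (fan_in endpoints per node),
    # and only the aggregation nodes are fully meshed
    nodes = -(-endpoints // fan_in)          # ceiling division
    return endpoints + full_mesh_tunnels(nodes)

for n in (50, 1_000, 100_000):
    print(n, full_mesh_tunnels(n), aggregated_tunnels(n, fan_in=100))
```

Even at a hundred thousand endpoints, aggregation keeps the tunnel count in the hundreds of thousands, while a full mesh runs into the billions.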

Would Internet-scale traffic kill this model?  It depends on how many virtual pipes you’re prepared to tolerate.  If I can host router instances anywhere I like, I could build a multi-layer aggregation network from the user edge inward, to the point where I’d aggregated as much traffic as a hosted instance could carry.  At that point I’d have to hop on a set of tunnels that would link me to all the other instances at the same level so no deeper level of aggregation/routing was required and no software instances had to handle more traffic than my preset maximum.
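A rough sketch of that aggregation arithmetic, again with made-up figures for edge load, instance capacity, and fan-in, shows how the layering would play out:

```python
# Sketch of the multi-layer aggregation idea: keep adding aggregation layers
# until no hosted router instance carries more than a preset traffic maximum,
# then fully mesh whatever instances are left at that level.
# All figures are assumptions for illustration.

def aggregation_plan(edge_load_gbps: float, edge_count: int,
                     instance_cap_gbps: float, fan_in: int):
    layers = []
    count, per_node = edge_count, edge_load_gbps
    while per_node * fan_in <= instance_cap_gbps and count > fan_in:
        count = -(-count // fan_in)          # instances at the next layer in
        per_node = per_node * fan_in         # load each of them carries
        layers.append((count, per_node))
    mesh_tunnels = count * (count - 1) // 2  # tunnel mesh at the innermost layer
    return layers, mesh_tunnels

layers, mesh = aggregation_plan(edge_load_gbps=0.5, edge_count=100_000,
                                instance_cap_gbps=40.0, fan_in=16)
print(layers, mesh)
```

With those assumptions, a single aggregation layer is all the per-instance capacity allows, and the inner mesh still needs millions of tunnels, which is exactly why the next step matters: the pipes have to cost essentially nothing.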

We could build something like this, not with physical routes (they’d be inefficient) but with virtual pipes that had effectively no overhead, letting us fan out as many such pipes as we liked.  This is an important point in NGN design because it says we’d probably need white-box SDN tunnel switching above optical-layer transport to groom out enough virtual pipes to keep the traffic per router instance down to a manageable level.  We’d then push the switching/routing closer to the edge and mesh the instances we place there.
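What that white-box tunnel-switching layer would have to do is simple in concept: cross-connect on an outer tunnel label and never look inside.  A hedged sketch (the labels, ports, and class itself are invented for illustration):

```python
# Sketch of a white-box tunnel cross-connect: forwarding is on an outer tunnel
# label only, so the box never inspects the IP traffic inside, and adding more
# virtual pipes doesn't add routing work.  Labels and ports are made-up examples.

class TunnelSwitch:
    def __init__(self):
        # (in_port, in_label) -> (out_port, out_label)
        self.cross_connects: dict[tuple[int, int], tuple[int, int]] = {}

    def program(self, in_port: int, in_label: int,
                out_port: int, out_label: int) -> None:
        # this is the job an SDN controller would do
        self.cross_connects[(in_port, in_label)] = (out_port, out_label)

    def forward(self, in_port: int, in_label: int, frame: bytes):
        out_port, out_label = self.cross_connects[(in_port, in_label)]
        return out_port, out_label, frame   # inner payload untouched

sw = TunnelSwitch()
sw.program(in_port=1, in_label=100, out_port=7, out_label=2001)
print(sw.forward(1, 100, b"encapsulated-IP-traffic"))
```

Because forwarding never touches the encapsulated traffic, adding more virtual pipes adds table entries, not routing work.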

This also suggests a new model for the tunnel-core we’ve now created.  At the edge of that element we find something that might be considered the core of what used to be called an “NBMA,” or “non-broadcast multi-access,” network.  This concept was developed in the old days of ATM and frame relay, with their switched virtual circuits.  The goal was to let a switched pathway augment an edge route by creating a passage across a multi-access core whose “endpoints” might not all have enough traffic with each other to justify a tunnel.

Suppose we used NBMA concepts with our tunnel-core?  We could create the tunnels between our inner-edge aggregating instances when traffic came along, and then either leave them in place indefinitely or age them out based on some policy.  A directory would let each of these aggregation edge instances find its partners when it had traffic to route.  If we presumed these aggregation edge elements were white-box SDN switches, modified as needed or given controller intelligence to provide NBMA capability, we could extend aggregation deeper than with hosted router instances alone.
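Mechanically, the edge behavior is simple enough to sketch.  The class, directory interface, and timeout value below are hypothetical, just to show the directory-lookup, on-demand-setup, and aging steps:

```python
# Sketch of the NBMA-style tunnel-core behavior described above: an edge
# aggregation instance consults a directory when it first has traffic for a
# partner, sets up a tunnel on demand, and ages idle tunnels out per policy.
# Class names and the directory interface are hypothetical.

import time

class NbmaEdge:
    def __init__(self, directory: dict[str, str], idle_timeout_s: float = 3600):
        self.directory = directory            # partner edge -> underlay address
        self.idle_timeout_s = idle_timeout_s
        self.tunnels: dict[str, float] = {}   # partner edge -> last-used time

    def send(self, partner: str, frame: bytes) -> str:
        if partner not in self.tunnels:
            underlay_addr = self.directory[partner]   # directory lookup
            print(f"provisioning tunnel to {partner} via {underlay_addr}")
        self.tunnels[partner] = time.monotonic()      # refresh idle timer
        return f"sent {len(frame)} bytes to {partner}"

    def age_out(self) -> None:
        now = time.monotonic()
        for partner, last_used in list(self.tunnels.items()):
            if now - last_used > self.idle_timeout_s:
                del self.tunnels[partner]             # policy-driven teardown

edge = NbmaEdge({"edge-west-42": "192.0.2.99"})
print(edge.send("edge-west-42", b"aggregated traffic"))
```

The policy question is all in the aging: how long an idle tunnel earns its keep before it’s torn down.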

This is the model that seems to have the most promise for an NGN because it could scale from private networks to the public Internet.  It also demonstrates the roles of the various new-age network technologies: SDN is the tunnel-manager, NFV is the instance-deployer.  We could build the network of the future with our two key technologies using this model.  Will we?  That could be more difficult.
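That division of labor can be sketched as a control-plane split; the orchestrator and controller interfaces below are stand-ins I’ve invented, not any real MANO or SDN controller API:

```python
# Sketch of the division of labor: NFV places the router/switch instances,
# SDN programs the tunnels that mesh them.  Both interfaces are hypothetical.

from itertools import combinations

class NfvOrchestrator:
    def deploy_router_instance(self, site: str) -> str:
        return f"router@{site}"              # pretend we spun up a hosted instance

class SdnController:
    def build_tunnel(self, a: str, b: str) -> tuple[str, str]:
        return (a, b)                        # pretend we programmed a tunnel

def build_aggregation_mesh(sites: list[str]):
    nfv, sdn = NfvOrchestrator(), SdnController()
    instances = [nfv.deploy_router_instance(s) for s in sites]       # NFV's job
    tunnels = [sdn.build_tunnel(a, b)
               for a, b in combinations(instances, 2)]               # SDN's job
    return instances, tunnels

print(build_aggregation_mesh(["metro-east", "metro-west", "metro-central"]))
```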

The problem with this sort of far-out network planning is that it presumes a future architecture that’s accepted by operators and supported by vendors, when in truth we have neither of those things in place.  The fact is that the greenest of all the network fields, the period when a transition from TDM was essential and operators accepted that, has passed.  We have exited the network revolution and entered the evolution.  Evolution, as we know, isn’t always the most logical driver of change.  We may take optimum steps that don’t add up to an optimum result.

I’m not sure how we could get back to a revolutionary mindset.  We might evolve into some backwater position so unwieldy that mass change looked good by comparison, or some new player might step in and do the right thing.  Even one operator, one vendor of substance, could change our fortunes.

Do we want that?  That’s the last question.  What would this future network do for us?  How much cheaper could it be?  Would its greater agility to generate totally new services be valuable in a market that seems to have hunkered down on Ethernet and IP?  Those are difficult questions to answer, and that’s too bad, because whether we think we’re approaching the future in a single bound or in little baby steps, it’s what’s different about it that makes it exciting.