Why Not Have REAL Virtualization?

Does a network that presents an IP interface to the user constitute an “IP network”?  Is that condition enough to define a network, or are there other properties?  These aren’t questions we’re used to asking, but in the era of virtualization and intent modeling, there are powerful reasons to ask whether a “network” is defined by its services alone.  One of the most powerful is that we could sometimes create user services in a better way than we traditionally do.

SDN is about central software control over forwarding.  In the most-often-cited “stimulus” model of SDN, you send a packet to a switch, and if the switch doesn’t have a route for it, it requests one from the central controller.  There is no discovery and no adaptive routing, and in fact unless some provision is made to handle control packets (ICMP, etc.), they get no response at all.  But if you squirt in an IP packet, it gets to the destination, provided the controller is working.  So, from the user’s perspective, this is an IP network.
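
To make the stimulus model concrete, here’s a minimal sketch in Python.  Every class and method name is hypothetical, invented for illustration; this isn’t any real controller’s API (OpenFlow, Ryu, ONOS, or otherwise).

```python
# A minimal sketch of the reactive ("stimulus") SDN model: a switch with
# no matching rule punts the packet to the controller, which answers with
# a forwarding rule the switch then caches.  All names are hypothetical.

class Controller:
    """Central controller: the only place topology knowledge lives."""
    def __init__(self, routes):
        self.routes = routes                      # (switch, dest IP) -> port

    def request_route(self, switch_name, dest_ip):
        # Centrally computed; the switch itself never discovers anything.
        return self.routes[(switch_name, dest_ip)]

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}                      # dest IP -> output port

    def handle_packet(self, dest_ip, controller):
        if dest_ip in self.flow_table:
            return self.flow_table[dest_ip]       # fast path: just forward
        # "Stimulus": no rule, so ask the controller and cache its answer.
        port = controller.request_route(self.name, dest_ip)
        self.flow_table[dest_ip] = port
        return port

ctl = Controller({("s1", "10.0.0.2"): 3})
s1 = Switch("s1")
print(s1.handle_packet("10.0.0.2", ctl))          # 3: triggers the controller
print(s1.handle_packet("10.0.0.2", ctl))          # 3: served from the cache
```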

If this sounds like a useless exercise, reflect on the fact that a lot of our presumptions about network virtualization and infrastructure modernization rely implicitly on the “black box” principle: a black box is known only by the relationship between its inputs and outputs.  We can diddle around inside any way we like, and if that relationship is preserved (at least as much of it as the user exercises) we have equivalence.  SDN works, in short.
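
The black-box principle can be read as a simple test, sketched below in Python (all names invented here): two implementations are the same “network” if they agree on every input the user actually exercises.

```python
def equivalent(impl_a, impl_b, exercised_inputs):
    """Black-box equivalence: only the input/output relationship counts."""
    return all(impl_a(x) == impl_b(x) for x in exercised_inputs)

# A legacy hop-by-hop network and an SDN network are "the same network"
# to the user if they agree on everything the user actually does.
legacy_net = lambda packet: ("delivered", packet)
sdn_net    = lambda packet: ("delivered", packet)
print(equivalent(legacy_net, sdn_net, ["pkt1", "pkt2"]))   # True
```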

Over the long haul, this truth means that we could build IP services without IP devices in the network.  Take some optical paths, supplement them with electrical tunnels of some sort, stick some boxes in place that do SDN-like forwarding, and you have a VPN.  It’s this longer-term truth that SD-WANs are probably building toward.  If the interface defines the service, present it on top of the cheapest, simplest infrastructure possible.
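
As a purely illustrative sketch (every name below is invented), that recipe stacks up like this: an IP interface presented to the user, with nothing but paths, tunnels, and forwarding rules underneath.

```python
# Purely illustrative: an "IP VPN" with no IP routers inside it.
vpn = {
    "presents": "an IP interface (all the user sees, and all the user tests)",
    "built_from": [
        "optical wavelength paths between sites",
        "electrical tunnels layered over those paths",
        "SDN-style forwarding boxes with centrally installed rules",
    ],
}

# By the black-box principle, if the presented interface behaves like IP,
# this *is* an IP VPN; the insides are free to be anything cheap and simple.
print(vpn["presents"])
```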

In the near term, of course, we’re not likely to see something this radical.  Anything that touches the consumer has to be diddled with great care, because users don’t like stuff that works differently.  However, there are places where black-box networks don’t touch users in a traditional sense, and here we might well find a solid reason for a black-box implementation.  The best example is the Evolved Packet Core (EPC) of mobile networks and its virtual (vEPC) equivalent.

Is a “virtual EPC” an EPC whose functionality has been pulled out of appliances and hosted on servers?  Is that all there is to it?  It’s obvious that the virtual-device model of an EPC would fit the black-box property set, because its inputs and outputs would be processed and generated by the same logic as the appliances it replaces.  One must ask, though, whether this is the best use of virtualization.

The function of the EPC in a mobile network is to accommodate a roaming mobile user who has a fixed address, the address through which the user relates to other users and to other network (and Internet) resources.  In a simple description of the EPC, a roaming user has a tunnel that connects their current cell site on one end to the packet network (Internet, for example) gateway on the other.  As the user moves from cell to cell, the tunnel gets moved to ensure packets are still delivered correctly.  You can read the EPC documentation and find references to the “PGW” (the packet data network gateway), the “SGW” (the serving gateway), the “MME” (the mobility management entity), and so forth, and we can certainly turn all or any of these into hosted software instances.
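
Here’s a deliberately oversimplified Python sketch of that core behavior, collapsing all the real EPC machinery (GTP tunnels, the PGW/SGW/MME split) into one toy class just to show what the black box actually does:

```python
# Highly simplified: the EPC's essential job is keeping a fixed user IP
# reachable while the user moves between cells.  Real EPCs use GTP tunnels
# and split roles across PGW/SGW/MME; this toy collapses all of that.

class SimplifiedMobileCore:
    def __init__(self):
        self.tunnels = {}                     # user IP -> current cell site

    def attach(self, user_ip, cell):
        self.tunnels[user_ip] = cell          # tunnel: cell <-> packet gateway

    def handover(self, user_ip, new_cell):
        # The user's address never changes; only the tunnel endpoint moves.
        self.tunnels[user_ip] = new_cell

    def deliver(self, user_ip, packet):
        print(f"deliver {packet!r} to {user_ip} via {self.tunnels[user_ip]}")

core = SimplifiedMobileCore()
core.attach("10.9.8.7", "Cell 1")
core.handover("10.9.8.7", "Cell 5")           # user roamed
core.deliver("10.9.8.7", "hello")             # arrives via Cell 5, IP unchanged
```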

However…if the SDN forwarding process builds what is in effect a tunnel by creating cooperating forwarding-table entries under central control, could we not get EPC-like behavior without all the acronyms and their associated entities?  If the central SDN controller knows from cell-site registration that User A has moved from Cell 1 to Cell 5, could the SDN controller not simply redirect the same user address to a different cell?  Remember, the world of SDN doesn’t really know what an IP address is; it only knows what it has forwarding rules for, and what those rules direct the switch to do.
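
A sketch of that alternative, again with invented names: the inputs and outputs match the simplified EPC above, but inside there are no tunnel entities at all, just a controller rewriting forwarding rules.

```python
# Same mobility behavior, done purely with SDN forwarding rules: no
# PGW/SGW/MME entities, just rule replacement under central control.
# All names are hypothetical.

class MobilitySDNController:
    def __init__(self, switches):
        self.switches = switches              # switch name -> flow table

    def register(self, user_ip, cell_port, path):
        # Install rules along the path so the user's IP egresses at its cell.
        for sw in path:
            self.switches[sw][user_ip] = cell_port

    def handover(self, user_ip, new_cell_port, path):
        # "Moving the tunnel" is nothing more than replacing rules; the
        # controller never needs to know what an IP address "means".
        self.register(user_ip, new_cell_port, path)

ctl = MobilitySDNController({"agg1": {}, "agg2": {}})
ctl.register("10.9.8.7", "port->Cell 1", ["agg1", "agg2"])
ctl.handover("10.9.8.7", "port->Cell 5", ["agg1", "agg2"])
print(ctl.switches)   # every switch now steers the user's address to Cell 5
```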

You could also apply this to content delivery.  A given user who wants a given content element could, instead of being directed at the URL level to a cache, simply be connected to the right one based on forwarding rules.  Content in mobile networks could have two degrees of freedom, so to speak, with both ends of a connection linked to the correct cell or cache depending on content, location, and other variables.
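
The same trick, sketched below with hypothetical names: the cache choice becomes a forwarding rule rather than a URL-level redirect, so moving the user or moving the content changes only rules.

```python
# Sketch: steer a content request to the right cache with a forwarding
# rule instead of a URL-level redirect.  All names are hypothetical.

def pick_cache(user_cell, content_id, cache_map):
    """Choose a cache by location and content, not by rewriting URLs."""
    return cache_map.get((user_cell, content_id), "default-cache")

def install_steering_rule(flow_table, user_ip, cache):
    # The user addresses "the content service"; this rule decides which
    # cache actually answers.  Both ends are free to move: two degrees
    # of freedom, linked only by rules.
    flow_table[user_ip] = cache

flows = {}
cache = pick_cache("Cell 5", "video-123",
                   {("Cell 5", "video-123"): "edge-cache-5"})
install_steering_rule(flows, "10.9.8.7", cache)
print(flows)                                  # {'10.9.8.7': 'edge-cache-5'}
```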

I’m not trying to design all the features of the network of the future here, just illustrate the critical point that virtualization shouldn’t impose legacy methodologies on virtual infrastructure.  Let what happens inside the black box be based on optimality of implementation, taking full advantage of all the advanced features that we’re evolving.  Don’t make a 747 look like a horse and buggy.

We’re coming to accept the notion that the management of functionally equivalent elements should be based on intent-model principles, which means that from a management interface perspective both legacy and next-gen virtualization technology should look the same.  Clearly, they have to look the same from the perspective of the user connection to the data plane, or nothing currently out there could connect.  I think this proves that we should be taking the black-box approach seriously.
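
To close the loop on that point, here’s a minimal sketch of what “looking the same” could mean in code (the interface and class names are invented): one management surface, two completely different implementations behind it.

```python
# Sketch of the intent-model idea: one management interface, two very
# different implementations behind it.  Names are illustrative only.

from abc import ABC, abstractmethod

class MobileCoreIntent(ABC):
    """What the service must do, not how it is built."""
    @abstractmethod
    def handover(self, user_ip, new_cell): ...

class ApplianceEPC(MobileCoreIntent):
    def handover(self, user_ip, new_cell):
        print(f"signal SGW/MME to move {user_ip}'s tunnel to {new_cell}")

class BlackBoxSDNCore(MobileCoreIntent):
    def handover(self, user_ip, new_cell):
        print(f"rewrite forwarding rules: {user_ip} -> {new_cell}")

# Management code is identical either way -- that is the intent model.
for core in (ApplianceEPC(), BlackBoxSDNCore()):
    core.handover("10.9.8.7", "Cell 5")
```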

Part of this is the usual buzz-washing problem.  Vendors can claim that anything is “virtual”; after all, what is less real than hype?  You get a story by saying you have a “virtual” EPC, and nothing much if you simply have an EPC, or a hosted version of one.  Real virtualization has to toss off the limitations of the underlying technology it’s replacing, not reproduce them.  vEPC would be a good place to start.