Is Virtualization Reality Even More Elusive than Virtual Reality?

Software, in defining networks, shouldn't be expected to cull through a lot of detail on network operation.  But yes, it should.  SDN will be the agent that makes applications a part of the network itself.  No, that's NFV's role, or maybe it's nobody's role because it's not even a goal.  If you listen to the views of the future of networking, you get views, in the plural sense.  We have no consensus on a topic that's critical to the way we build applications, networks, data centers, maybe even client devices.  We have no real idea what the demands of the network of the future might be, or what applications will demand of it.

I’ve come to believe that the issues we face in networking are really emerging out of the collision of two notions: elastic relationships between application components and their resources, and the notion that functionality once resident in special-purpose devices could be exported to run on servers somewhere.  Either of these things changes the rules; both change everything.

When we build networks from discrete devices, we connect them through well-known interfaces, much the way a kid builds castles with interlocking blocks.  Each box represents a unit of functionality and cooperation is induced across those standard connections.  Give the same kid a big heap of cement and you’re not likely to get as structured a castle.  The degrees of freedom created by the new tools overwhelm the builder’s natural abilities to organize them in a useful way.

When we pull functionality out of devices, we don’t eliminate the need to organize that functionality into cooperating systems we’d call a “service”.  In fact, it makes that harder.  First, there was a lot of intra-device flow that was invisible inside the package and now is not only visible but has to be connected somehow.  Worse, these connections are different from those well-known interfaces because they represent functional exchanges that have no accepted standards to support them.  And they should never be exposed to the user at all, lest the user build a bad image of a cow instead of a good castle by sticking the stuff together in the wrong way.  Virtualizing network functions requires that we organize and orchestrate functionality at a deeper level than ever, and still operationalize the result at least as efficiently as we’ve always done with devices.  Any operator or enterprise network planner knows that operationalization complexity and cost tend to rise with the square of the number of components, and with virtualized functions we explode the number of components.
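To make that square-of-the-components rule of thumb concrete, here is an illustrative sketch (my own model, not a formula from any standard): if operational cost tracks the number of distinct component pairs that may need coordinated management, decomposing each device into a few virtual functions multiplies the pair count far faster than the component count.

```python
# Illustrative model only: treat operations complexity as proportional
# to the number of distinct component pairs (n choose 2), which grows
# with the square of n. The 4-device / 3-functions-per-device numbers
# below are hypothetical.

def pairwise_relationships(n: int) -> int:
    """Distinct component pairs that might need coordinated management."""
    return n * (n - 1) // 2

devices = 4                   # a service built from 4 discrete devices
functions = devices * 3       # each device decomposed into 3 functions

print(pairwise_relationships(devices))    # 6 pairs to operationalize
print(pairwise_relationships(functions))  # 66 pairs: an 11x explosion
```

Tripling the component count here multiplies the pairwise relationships elevenfold, which is the sense in which virtualization "explodes" operational scope.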

And just when you’ve reconciled yourself to the fact that this may well suck, you face the second issue, which is that the cloud notion says that these components are not only divorced from those nice self-organizing devices, they’re scattered about in the cloud in a totally unpredictable way.  I might have a gadget with three functional components, and that creates three network applications to run.  Where?  Anywhere.  So now these things have to find one another at run time, and they have to be sited with some notion of optimizing traffic flow and availability when we deploy them.  We have to recognize a problem with one of these things as being a problem with the collective functionality we unloaded from the device, even though the three components are totally separated and may not even know about one another in any true sense.  Just because I pump out data downstream doesn’t mean I know if anyone is home, functionally, out there.
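The "find one another at run time" problem above is essentially service discovery. A minimal sketch of the idea, using a hypothetical in-process registry (real deployments would lean on something like DNS, a distributed store, or an orchestrator's catalog; the component names and addresses below are invented):

```python
# Minimal service-discovery sketch. ServiceRegistry, the component
# names, and the addresses are all hypothetical illustrations.

from typing import Dict, Tuple

class ServiceRegistry:
    """Maps a function name to wherever it happened to be deployed."""

    def __init__(self) -> None:
        self._endpoints: Dict[str, Tuple[str, int]] = {}

    def register(self, name: str, host: str, port: int) -> None:
        # Each component announces its location at deploy time.
        self._endpoints[name] = (host, port)

    def lookup(self, name: str) -> Tuple[str, int]:
        # Peers are resolved at run time, not wired in at build time.
        return self._endpoints[name]

registry = ServiceRegistry()
# The "gadget" decomposed into three components, placed anywhere:
registry.register("classifier", "10.0.3.17", 9001)
registry.register("shaper", "10.2.8.4", 9001)
registry.register("logger", "10.1.1.9", 9001)

host, port = registry.lookup("shaper")
print(f"classifier forwards to shaper at {host}:{port}")
```

Note what the sketch does not solve: placement for traffic flow and availability, and correlating a fault in one component with the health of the collective function, which is the harder part of the argument above.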

Over the years, as network equipment evolved from being service-specific to being “converged”, we developed a set of practices, protocols, and tools to deal with the new multiplicity of missions.  We gradually came to view service management as something more complicated than aggregated network or device management.  We began to recognize that giving somebody a management view into a virtual network was different from giving them such a view into a real one.  We’re now confronting a similar problem, but at a much larger scale, and with a timeline to address it that’s compressed by market interest, market hype, and vendor competition.

That’s the bad news.  The good news is that I believe the pieces of the holistic vision of cloud, SDN, and NFV that we need today are all available out there.  We don’t have a pile of disorderly network cement, but rather a pile of Legos mixed with Lincoln Logs, Tinkertoy parts, and more.  We can build what we need if we organize what we have, and that seems to be the problem of the day.  The first step to solving it is to start at the top: define what applications need from networks, and frame the overall goals of the services themselves.  We’ve kind of done that with the cloud, which may be why it has advanced fairly fast.  We’ve not done it with SDN (OpenFlow is hardly the top of the food chain), and we’re not doing it with NFV, which focuses on decomposing devices and not composing services.  We’re groping the classical elephant here, and while we may find trees and snakes and boulders in our exploration we’d better not try to pick fruits or milk something for venom or quarry some stones for our garden.  Holistic visions matter, sometimes.  This is one of them.