There is obviously a quiet revolution going on in virtualization. Quiet is good in one sense; the hype on many tech topics is so overwhelming that it’s impossible to get any legitimate issues over the visibility horizon. In another sense, too much quiet can end up submerging those same issues in silence. Let’s take a moment here to try to make sense of the things that need to be sensible.
Virtualization is all about the combination of abstraction and realization. An abstraction is a model of functionality that represents features, capabilities, or “intent” and not the mechanization of those things. In the Java programming language, you could say that the abstraction is an “Interface”. A Java Class that “Implements” an “Interface” is an example of the realization piece. Anything that implements a given interface is equivalent in functionality with respect to that interface. In old terms, an abstraction is a “black box”; it’s known only by its external properties. In modern terms, the abstraction could be called an “intent model”.
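To make the Java analogy concrete, here’s a minimal sketch; the interface and class names are my own and purely illustrative, and the “connection service” intent is just an example:

```java
// Abstraction: an "intent model" that describes features and capabilities,
// not the mechanization of those things.
interface ConnectionService {
    void connect(String endpointA, String endpointB);
}

// Realization: one of possibly many implementations, all equivalent with
// respect to the ConnectionService interface.
class MplsConnectionService implements ConnectionService {
    @Override
    public void connect(String endpointA, String endpointB) {
        // A real implementation would provision a path; this one just records the intent.
        System.out.println("MPLS path: " + endpointA + " <-> " + endpointB);
    }
}
```

Anything that accepts a ConnectionService doesn’t know or care which realization it gets; that’s the black box at work.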
The interesting point to inject here is that virtualization can virtualize anything that can be represented by an abstraction and realized by at least one implementation. That means the abstraction at the root of everything has to be authored around some sense of how you’re going to divide functionality, and how you’ll compose those units of functionality into something useful at a real-world level. What we are seeing in virtualization today is a forking of the road in terms of the functionality that’s being represented by our abstractions.
The easiest way to author a virtual-something is to make it represent a real-something. A virtual server is a server; a virtual network is a network built from virtual network devices. That’s a reasonable way to start, because it makes the virtual things you’ve created easy to use. You use them like you’d use the real things they were derived from.
The problem with the real-to-virtual transformation is that the properties you end up with are the ones you started off with. If you expect to build services on virtual machines, you think in machine terms. That dissipates one of the major values of virtualization—the fact that you can create anything with useful properties, whether it exists in the real world or not. Why build from the same blocks you’ve had for a couple of generations when you can now define your building blocks freely?
The implications of this on the future are fairly clear. Real progress in networking, network services, and the cloud can come only if we abandon our policy of trying to use abstract logs to build future skyscrapers. Why not abstract something more modern? In the cloud, I think there’s clear evidence that the “virtual machine” thinking of the past is giving way to cloud-as-a-mammoth-virtual-computer thinking. We can, through virtualization, define features and capabilities of that new artifact that map optimally to any new application or service. In networking, not so much.
What I’ve called the network-of-devices approach, the box approach, is still the rule in networking. A virtual network is a network of virtual boxes, not a mammoth-virtual-box artifact created by assembling features. Why? In large part, I think, because of what I’ll call the recipe dilemma.
A box network has the advantage of being self-assembling. Put the boxes together and you get whatever it is that a collection of boxes is supposed to do. A network-of-features approach means that we need a recipe, something that defines how the features are to be assembled. That something, we know today, is likely to be a model structure that represents the relationship among the abstractions—the intent models—that a given service or application represents.
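One hedged way to picture that recipe is as a simple hierarchy in code, where each node is an intent model that may be decomposed into subordinate features. This is a toy sketch of my own, not any standard’s structure, and the feature names are invented:

```java
import java.util.ArrayList;
import java.util.List;

// A node in the service "recipe": an intent model that is either realized
// directly or decomposed into subordinate feature models.
class IntentModel {
    private final String name;
    private final List<IntentModel> children = new ArrayList<>();

    IntentModel(String name) { this.name = name; }

    IntentModel add(IntentModel child) {
        children.add(child);
        return this;
    }

    // "Deploying" the service walks the recipe top-down.
    void deploy(String indent) {
        System.out.println(indent + "deploy " + name);
        for (IntentModel child : children) {
            child.deploy(indent + "  ");
        }
    }
}

class RecipeDemo {
    public static void main(String[] args) {
        IntentModel vpnService = new IntentModel("vpn-service")
            .add(new IntentModel("access-feature"))
            .add(new IntentModel("core-connectivity-feature"))
            .add(new IntentModel("management-feature"));
        vpnService.deploy("");
    }
}
```

The point isn’t the code, it’s that the relationships among the intent models are explicit; nothing self-assembles the way a box network does.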
Amazon, whether deliberately or accidentally, has approached the necessary feature-think in the way it’s evolving the orchestration of serverless computing and Lambda. You have a bunch of stateless components, and somehow they have to work together, even though the primary principle of statelessness is that a component knows nothing about context, and thus nothing about coordination or even progress. Some new layer has to impose that harmony from the outside, which is what Step Functions do today and what something like “Step-Plus” will do in the near future.
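As a rough illustration of why the harmony has to come from outside, here’s what a stateless component looks like in Java terms. This assumes the aws-lambda-java-core library; the handler name and payload shape are made up for the example:

```java
import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// A stateless component: it knows nothing about the workflow it belongs to.
// Whatever "progress" means is carried in the input the orchestrator hands it.
public class PriceQuoteHandler implements RequestHandler<Map<String, Object>, Map<String, Object>> {
    @Override
    public Map<String, Object> handleRequest(Map<String, Object> input, Context context) {
        Map<String, Object> output = new HashMap<>(input);
        // Do one self-contained piece of work...
        output.put("quote", 42.0);
        // ...and return; the state machine outside this code, not the code itself,
        // decides what happens next.
        return output;
    }
}
```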
What makes this possible? Why can Amazon define, for computing, the kind of recipe structure that we don’t have in networking, and can’t seem to even get our arms around? The reason is that a server is itself a kind of virtual thing. It’s a platform on which software features run. There’s no inherent application or service functionality in one; it has to be loaded on through software. Assembling something from features follows normal development thinking; developers and operations types have been moving features onto hosts for decades. In networking, a router is a router. It has features and functions. We don’t need developers to assemble those features to make them useful; we just need to connect the boxes.
The things that could change networking, and advance it to the true virtual form that I think we can call “network-as-a-service”, are SDN, SD-WAN, and hierarchical modeling. The first two are steps toward the right place, but along what’s perhaps an imperfect path; the third is a perfect path that has so far been difficult to morph into useful steps.
SDN is a way to define “pure forwarding”: it expresses a connection as a series of ordered hops, something that in some of my early work I called a “forwarding equation”. “Trunk 1 identifier two equals trunk 3 identifier five” is an example of a way to express forwarding at a node. Input goes to output. String a bunch of these together and you create a path. The problem is that connection is only part of a useful network service, and SDN doesn’t provide the rest.
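Read literally, a forwarding equation is just a match-action rule. Here’s a hedged sketch in Java of what a node’s forwarding table might look like; it’s my own toy representation, not any controller’s API:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;

// A toy forwarding table: (input trunk, input identifier) -> (output trunk, output identifier).
class ForwardingTable {
    private final Map<Entry<Integer, Integer>, Entry<Integer, Integer>> rules = new HashMap<>();

    void addRule(int inTrunk, int inId, int outTrunk, int outId) {
        rules.put(new SimpleEntry<>(inTrunk, inId), new SimpleEntry<>(outTrunk, outId));
    }

    Entry<Integer, Integer> forward(int inTrunk, int inId) {
        return rules.get(new SimpleEntry<>(inTrunk, inId));
    }

    public static void main(String[] args) {
        ForwardingTable node = new ForwardingTable();
        // "Trunk 1 identifier two equals trunk 3 identifier five."
        node.addRule(1, 2, 3, 5);
        System.out.println(node.forward(1, 2)); // prints 3=5
    }
}
```

String tables like this across a set of nodes and you have a path, but only a path; the rest of the service still has to come from somewhere.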
SD-WAN is essentially a way to abstract a network completely. From the outside looking in, an SD-WAN is a virtual private network with the potential for explicit connectivity. It’s the logical top layer of a service abstraction where the “service” is not the service provided by individual boxes, but rather the service created by the collection of boxes (or features) overall. The problem is that while a service of a network is more abstract than a service of boxes, the result still ends up being constrained, to a degree, by what those boxes could create as a network.
Back when the TMF started its Service Delivery Framework (SDF) work, it took a step along what might have been the perfect path to virtual network services. A service was a collection of features assembled in an abstract model, not a network of boxes. The problem was that it was so difficult to unravel what was emerging that a big Tier One contacted me about the initiative and said “Tom, you have to understand that we are 100% behind implementable standards, but we’re not sure this is one of them.” My first ExperiaSphere project was aimed at creating a Java framework that let you define a network service as a collection of features by writing a Java application using the appropriate abstractions (Classes and Interfaces).
A better approach is to represent abstractions using a data model, which is roughly where the TMF had been heading with its SID and NGOSS Contract, and which was my approach in the second phase of ExperiaSphere. OASIS TOSCA is the best example of a standard modeling approach that I believe can be used effectively to define applications and services as an assembly of features. TOSCA isn’t a perfect solution as it stands, but it clearly can turn into one eventually, if it’s applied properly.
Which, so far, it really hasn’t been. I think that even the most feature-minded thinkers out there are still just a bit mired in box-think. Composed services, in a modeling sense, are over a decade old, but the application of modeling to the realization of connection, control, and management features hasn’t really developed much. The deciding factor may be less the vendors, or even the operators, than the cloud providers. As I’ve noted in the last couple of blogs, players in the cloud have the same virtual network needs as everyone else, and they’re much better at getting out of destructive legacy mindsets than the vendors or operators are. I think Google, in particular, may be working through something revolutionary, and that will then influence virtualization overall.