Every vendor wants to sit astride the critical value propositions, and in networking that’s particularly true. With capital spending under pressure, it’s crucial to have some strong value propositions you can spout to impress buyers. The problem has been that “value” really means either cost or revenue, and much of networking is insulated from both by the structure of services. But that problem is increasingly in the past, because the drive to differentiate is creating innovative solutions.
Traditional networks build upward from physical media through a series of “layers”, each of which sees only the services/features of the layer below. The user experience is created at the top layer, which for the network (despite the media hype to the contrary) is Level 3 most of the time and Level 2 for the rest. Operators tell me that very few services are asserted at other layers. They also tell me that almost three-quarters of their operations costs are incurred at the service layer, the top.
Virtualization has thrown out the basis of this traditional division of features, because it allows for the creation of a virtual network with virtual layers. For example, you can use virtualization to build a “virtual wire” from one continent to another, transiting all manner of facilities. It still looks like a wire to the higher layers, but the traditional mission of Level 3, which is to tack physical media spans together to create routes and paths, really isn’t necessary anymore.
Lower-layer network features that replace features normally provided at a higher layer are examples of subtractive features; your new features subtract requirements from higher layers, and by doing that potentially simplify them. If, for example, we had total resilience at the optical layer, would we be able to eliminate not only error recovery but even dynamic routing at higher layers?
Another thing virtualization could do is support a new model of services. NFV is an example of what could be called feature addition; you can build services that add features to basic connectivity. These features could be related to connectivity, or they might be elements of applications, as would be the case with cloud computing.
Finally, you could think of parallelism as an attribute of services. Today we get most IP services by partitioning IP infrastructure. Might we instead use virtual wires to create truly independent subnetworks at the lower layers, and then pair these with hosted instances of switches and routers, or any other set of feature-driving elements, above? Why not?
Virtualization isn’t the only driver here, either. Every lower layer in the network, and every vendor with products there, has aspirations to add features and capabilities. Those features generate impacts that fall into the same three categories. Chip vendors, for example, want programmable forwarding, another set of features whose value proposition demands a change in how “services” are built.
How do we build them? Each of the virtualization impact models requires different accommodations, with a final common element.
Subtractive models are the simplest and most complicated at the same time. They’re simple because a problem removed at a lower layer automatically simplifies the higher layers. If an error doesn’t occur, then higher-layer steps aren’t required and operations is automatically less complicated and expensive. They’re complicated because full exploitation of subtractive feature benefits requires that you remove the features from the higher layer. As an example, SDN could take advantage of subtractive reliability and availability features at the optical layer because it has no intrinsic capability to recover from problems—you’d have to include that capability explicitly. With proper lower-layer features you simplify SDN software and presumably lower cost.
The additive models are easier to understand because we have examples of them in both the cloud and NFV. The challenge here is to identify incremental features that are valuable, and then cull those that aren’t expensive enough to justify hosting in the network. Business firewalls are expensive, so you can probably host them. Residential features of the same type are part of a device that might cost forty bucks and includes the essential WiFi feature. Host that?
The parallel model is really all about benefits and costs. We build VPNs today using network-resident features (MPLS). We could build them by partitioning capacity through virtual wires at the optical or optics-plus level, and then add in the L2/L3 features using hosted instances of routing/switching or simply by using SD-WAN features. Would the cost of this alternative approach be lower in the long run? Would the security benefits make the service inherently more valuable to buyers?
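To make that trade-off concrete, here is a minimal sketch of the parallel model, assuming purely hypothetical names (VirtualWire, HostedRouterInstance, build_parallel_vpn); it shows dedicated virtual wires full-meshed between sites, with a hosted routing instance per site, rather than a partition of shared IP infrastructure.

```python
# Illustrative only: hypothetical classes sketching the "parallel model" of
# building a VPN from partitioned virtual wires plus hosted routing instances,
# in contrast to today's MPLS-style partitioning of shared IP infrastructure.
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualWire:
    """A point-to-point capacity partition carved out at the optical/L1 level."""
    a_end: str
    z_end: str
    capacity_gbps: float

@dataclass
class HostedRouterInstance:
    """A hosted router (or SD-WAN node) at a site, terminating virtual wires."""
    site: str
    wires: List[VirtualWire] = field(default_factory=list)

@dataclass
class ParallelModelVPN:
    """An independent subnetwork: dedicated wires below, hosted L2/L3 features above."""
    name: str
    routers: List[HostedRouterInstance] = field(default_factory=list)

def build_parallel_vpn(name: str, sites: List[str], capacity_gbps: float) -> ParallelModelVPN:
    """Full-mesh the sites with virtual wires and attach a hosted router per site."""
    routers = {site: HostedRouterInstance(site=site) for site in sites}
    for i, a in enumerate(sites):
        for z in sites[i + 1:]:
            wire = VirtualWire(a_end=a, z_end=z, capacity_gbps=capacity_gbps)
            routers[a].wires.append(wire)
            routers[z].wires.append(wire)
    return ParallelModelVPN(name=name, routers=list(routers.values()))

if __name__ == "__main__":
    vpn = build_parallel_vpn("acme-corp", ["nyc", "chi", "sfo"], capacity_gbps=10.0)
    for r in vpn.routers:
        print(r.site, [f"{w.a_end}-{w.z_end}" for w in r.wires])
```

The cost and security questions above then become comparisons between this kind of dedicated structure and the shared-infrastructure alternative.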
The common element in all of this is that future services are less likely to be “coerced” from the cooperative behavior of a system of compatible devices and more likely to be deployed as a collection of features. You start with infrastructure that is inherently more service-independent and make it service-specific by targeting the services you can sell and then deploying the behaviors that make up those services.
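As a rough illustration of the “collection of features” idea, the sketch below uses a hypothetical feature catalog and a deploy_service function; the point is that the same service-independent resource pool becomes service-specific only when the chosen behaviors are deployed onto it.

```python
# Hypothetical sketch: a service as a named collection of deployable feature
# behaviors, bound to service-independent infrastructure at deployment time.
from typing import Callable, Dict, List

# A "behavior" here is simply something that can be deployed onto a resource pool.
Behavior = Callable[[str], str]

FEATURE_CATALOG: Dict[str, Behavior] = {
    "connectivity": lambda pool: f"virtual wires allocated from {pool}",
    "routing":      lambda pool: f"hosted router instances placed in {pool}",
    "firewall":     lambda pool: f"firewall VNF instantiated in {pool}",
}

def deploy_service(name: str, features: List[str], resource_pool: str) -> Dict[str, str]:
    """Make generic infrastructure service-specific by deploying only the
    behaviors this particular service actually needs."""
    return {f: FEATURE_CATALOG[f](resource_pool) for f in features}

if __name__ == "__main__":
    svc = deploy_service("business-vpn", ["connectivity", "routing", "firewall"], "metro-east-pool")
    for feature, result in svc.items():
        print(f"{feature}: {result}")
```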
This requires violating the old OSI assumption of building only on the layer below, because with the new model the goal may well be to exploit, and thus to expose, the features of lower layers. We’re already seeing some debate on this point, because if you have (for example) the ability to expand transport bandwidth at Level 1, do you allow service creation to invoke that capability directly, or do you wait until the sum of higher-layer capacity demands signals a need?
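The two positions in that debate can be sketched as policies; everything below (TransportSpan, the utilization threshold, the augmentation step) is an illustrative assumption, not a real provisioning API.

```python
# Hypothetical sketch of two policies for Level 1 bandwidth: let service
# creation expand transport directly, or wait until aggregate higher-layer
# demand signals the need. Thresholds and augment steps are illustrative.
from dataclasses import dataclass

@dataclass
class TransportSpan:
    """A Level 1 span with a provisioned capacity and a current aggregate load."""
    name: str
    capacity_gbps: float
    demand_gbps: float = 0.0

def expand_on_service_creation(span: TransportSpan, requested_gbps: float) -> None:
    """Cross-layer policy: a new service directly grows the L1 span it will use."""
    span.demand_gbps += requested_gbps
    if span.demand_gbps > span.capacity_gbps:
        span.capacity_gbps = span.demand_gbps

def expand_on_aggregate_demand(span: TransportSpan, requested_gbps: float,
                               utilization_trigger: float = 0.8) -> None:
    """Layered policy: capacity grows only when total demand crosses a threshold."""
    span.demand_gbps += requested_gbps
    if span.demand_gbps > span.capacity_gbps * utilization_trigger:
        span.capacity_gbps *= 2  # illustrative augment step

if __name__ == "__main__":
    a = TransportSpan("metro-a", capacity_gbps=100.0)
    b = TransportSpan("metro-b", capacity_gbps=100.0)
    expand_on_service_creation(a, 120.0)
    expand_on_aggregate_demand(b, 85.0)
    print(a, b, sep="\n")
```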
If you decide to let service creation cross OSI layers, you buy into a much more complex approach to service management today, but one that better prepares for those three virtualization feature models down the road. If we want to see every aspect of networking freed to develop its own optimum cost/benefit relationships, then we have to free every aspect of networking from traditional constraints—constraints applied at a time when we had no novel capabilities to exploit, so we had nothing to lose.
Experiences are the way of the future, service-wise. We already know that universal connectivity provides a platform for enriching our experiences by supporting easy delivery. Easy delivery alone isn’t the experience, though. Eventually, we will have to see all services as composed feature sets married with connectivity. Eventually, service management will have to be able to model those complex relationships and make them efficient and reliable. The sooner we look at how that’s done, the better.
This is IMHO the strongest reason to be looking at cloud-intent-model-driven service models (like OASIS TOSCA) rather than models that were derived to control connectivity. The most connective network you could ever have is simply an invitation for disintermediation if what the buyer wants is the experience that the network can deliver. Those experiences can always be modeled as functions-plus-connectivity, and that looks like a cloud and not like a network.
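As a closing illustration of functions-plus-connectivity, here is a small, intent-style service model expressed as a Python structure; the node types and relationships are hypothetical and only loosely TOSCA-inspired, not the OASIS TOSCA grammar.

```python
# Illustrative only: a functions-plus-connectivity service expressed as an
# intent-style model. Node types, fields, and relationships are hypothetical.
service_model = {
    "service": "video-cdn-edge",
    "nodes": {
        "cache_function":      {"type": "hosted.function", "image": "edge-cache", "sites": ["metro-east"]},
        "firewall_function":   {"type": "hosted.function", "image": "vfw", "sites": ["metro-east"]},
        "access_connectivity": {"type": "connectivity.l2", "endpoints": ["subscriber-edge", "metro-east"]},
        "core_connectivity":   {"type": "connectivity.l1", "endpoints": ["metro-east", "origin-dc"],
                                "capacity_gbps": 40},
    },
    # Relationships bind hosted features to the connectivity that delivers them.
    "relationships": [
        ("cache_function", "delivered_over", "access_connectivity"),
        ("cache_function", "fills_from", "core_connectivity"),
        ("firewall_function", "protects", "access_connectivity"),
    ],
}

def functions(model: dict) -> list:
    """Everything that isn't connectivity is a hosted feature: cloud-like, not network-like."""
    return [name for name, spec in model["nodes"].items() if spec["type"].startswith("hosted.")]

if __name__ == "__main__":
    print("hosted features:", functions(service_model))
```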