We’ve gone through a whole series of industry events that swirl around the notion of the next-gen network. I’ve blogged a bit about the TMF, NFV, SDN, and fiber conferences, and as people comment on the LinkedIn posts relating to the blogs it’s interesting how often the discussions end up on the topic of “operationalization”. This is a term I use (I don’t know if it’s fair to say I invented it as some have told me; I never checked) to describe the tuning of network technology to suit modern operations requirements. Every network revolution is an operations revolution, or it should be. That’s not been happening, and that’s a major and almost universal disconnect that all our hero technologies have to address, or they fail.
Changes in operations are inevitable when you change what a service is. In the old days, services were created by deploying service-specific technology. Manage the boxes, manage the services. Billing, provisioning, and all of the human and business processes of operators could drive to what was basically a singular destination—a lineman or a truck roll or a provisioning task. Order of service, order of network. IP convergence broke the 1:1 notion of services/networks because you now had an increased inventory of generalized infrastructure that handled basic connectivity and transport and then a set of service silos that imposed the features per service.
This is the point where “service management” and “network management” took their separate routes. Interestingly, OSS/BSS didn’t take either of the service/network paths, it stayed focused on the administrative processes of networking. This is why, IMHO, the TMF came up with the concepts of “Product Domains” and “Service Domains” and “Resource Domains”; operations processes now needed to be a bit multi-personality at some level because of the diverging notion of service and network.
Most operators have successfully glued the administrative, service, and network processes together adequately, but nearly all operators have been telling us all along that their accommodations haven't been efficient. Some operators take weeks to provision a simple VPN, and most will say that the process takes at least ten times as long as it should. They also say that their overall operations costs are far higher than they can tolerate, and far higher than they need to be. So arguably the same pressures that are driving things like SDN and NFV—which are pressures to reduce management costs at a pace at least as fast as revenue per bit is falling—should be driving operations modernization. They aren't, or at least have not been.
All that is coming to a head now because cloud computing services, software-defined networking, and network functions virtualization all incorporate the critical concept of virtualization. A virtual environment manipulates abstractions that convert to resource assignments as needed. This breaks another level of coupling between services and networks, and also threatens the administrative operations relationship with both, because services are now defined as relationships of abstract things while only real things can carry traffic and earn revenue.
To me, there’s a logical truth here. Administrative and business processes for operators are focused on manipulating service relationships. Service relationships in the virtual network of the future are based on orchestrated high-level functional abstractions that create the services. These abstractions are then converted into resource commitments by a second-level process. So there are multiple levels of orchestration implicit in next-gen virtualization-based networks. SDN defines, or at least should define, how the functional abstractions that represent connectivity and transport are realized on infrastructure. NFV could, or should, define how functional abstractions of any sort are realized by hosting software components and interconnecting them.
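To make the layering concrete, here's a minimal sketch of what "multiple levels of orchestration" could look like in code. All the names (FunctionalAbstraction, ResourceCommitment, the two orchestrate functions) are my own illustrations, not anything from the ETSI or ONF specifications; the point is only that a service order decomposes first into functional abstractions and only then into resource commitments.

```python
# Hypothetical two-level orchestration sketch; names are illustrative,
# not drawn from any SDN/NFV specification.
from dataclasses import dataclass

@dataclass
class FunctionalAbstraction:
    """A high-level service element, e.g. 'connectivity' or 'firewall'."""
    name: str
    params: dict

@dataclass
class ResourceCommitment:
    """A concrete assignment produced by lower-level orchestration."""
    abstraction: str
    resource_id: str

def service_orchestrate(order: dict) -> list:
    """Level 1: turn a service order into functional abstractions."""
    return [FunctionalAbstraction(name=f, params=order.get(f, {}))
            for f in order["features"]]

def resource_orchestrate(abstractions: list) -> list:
    """Level 2: realize each abstraction on (simulated) infrastructure."""
    return [ResourceCommitment(a.name, f"res-{i}")
            for i, a in enumerate(abstractions)]

order = {"features": ["connectivity", "firewall"], "firewall": {"rules": 5}}
commitments = resource_orchestrate(service_orchestrate(order))
```

In a real deployment the second level would be where SDN controllers or NFV hosting come in; the structure, not the stub logic, is the point.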
But anything that’s committing shared resources to specific service missions is also going to have a problem of management visibility. You have to record the resource-to-service relationships you’ve created when you orchestrate something or you can’t provide resource state to service consumers. Even knowing the address of a shared resource MIB isn’t enough, though, because 1) you have to protect resource MIBs from commands that would alter their functional state relative to other users, and 2) you have to somehow present a MIB for the abstract object that the orchestration created because that’s what the management system thinks is being managed, however the connection is made. You could never reflect the resource details of an NFV deployment of a firewall to a management system for firewalls; they’d not know what to do with the variables.
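The two requirements above can be sketched briefly: record the bindings at orchestration time, then expose only a derived "MIB" for the abstract object, never the raw resource MIBs. Everything here (the bindings table, the abstract_mib function, the variable names) is a hypothetical illustration under my own assumptions, not a real SNMP interface.

```python
# Hypothetical sketch: service-to-resource bindings plus a derived
# management view for the abstract object. Illustrative names only.

bindings = {}  # service object id -> list of resource ids
resource_mibs = {  # per-resource state; never exposed to service consumers
    "host-1": {"oper_status": "up", "cpu_load": 0.4},
    "host-2": {"oper_status": "down", "cpu_load": 0.0},
}

def bind(service_obj: str, resources: list) -> None:
    """Record which resources realize the service object (requirement 1
    implies consumers get read-only, indirect access to these)."""
    bindings[service_obj] = resources

def abstract_mib(service_obj: str) -> dict:
    """Requirement 2: present a MIB for the abstract object whose state
    is derived from its resources, hiding the resource variables."""
    states = [resource_mibs[r]["oper_status"] for r in bindings[service_obj]]
    status = "up" if all(s == "up" for s in states) else "degraded"
    return {"oper_status": status}

bind("vFirewall-42", ["host-1", "host-2"])
view = abstract_mib("vFirewall-42")
```

A firewall management system querying `vFirewall-42` sees only the derived status variable it expects, not the hosting details underneath.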
When we consider all of this, it’s hard not to assert that there can’t be something like “NFV orchestration” or “NFV management” except in the context of a higher-layer set of orchestration and management processes. One path to that goal is for the ISG to define a model of orchestration and management that, because it’s “virtual”, can envelop real devices or real management/control interfaces as much as virtual functions. Another path is for another body to publish a higher-level model that can wrap around NFV.
I think that a higher-level management model has to start with the notion of “objects” that represent our functional abstractions. These abstractions could represent NFV deployments, legacy control interfaces, even legacy devices. They could also represent collections of lower-level objects so you could build up a service by assembling functional components at several levels. The TMF has envisioned that in its notion of “Resource-Facing” or “Customer-Facing” services that can be orchestrated in a structured way—in theory. This orchestration has to not only decompose the object, it has to record the relationships—the “bindings”—between the components of each object, down to the atomic resource connections. Then it has to create some management image of each object that makes sense to a management system. Why? Because you can’t traverse management nodes in a problem-determination process if some of the places where function becomes structure are totally opaque. What is the state of a composite object, no matter how that object is created? It’s the composite state of the components of that object, so that has to be known, and known explicitly, or you are nowhere. Which is where I assert SDN and NFV strategies generally are today. Till we get this right, we are just dabbling in virtualization and we’re being naïve if we believe anyone will deploy SDN and NFV on a large scale.
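The composite-state point can be made precise with a small recursive sketch. The class and names below are mine, assumed for illustration; the idea is simply that a composite object's state is derived from its components all the way down to atomic resources, so no level of the hierarchy is opaque to problem determination.

```python
# Hypothetical sketch: hierarchical service objects whose state is the
# composite state of their components, down to atomic resources.

class ServiceObject:
    def __init__(self, name, components=None, atomic_state=None):
        self.name = name
        self.components = components or []   # lower-level objects (bindings)
        self.atomic_state = atomic_state     # only leaf resources carry this

    def state(self) -> str:
        """A leaf reports its own state; a composite derives its state
        recursively, so function-to-structure mappings stay visible."""
        if not self.components:
            return self.atomic_state
        child_states = [c.state() for c in self.components]
        return "up" if all(s == "up" for s in child_states) else "degraded"

# A toy two-level service: a VPN built from access and core components.
access = ServiceObject("access", atomic_state="up")
core = ServiceObject("core-transport", atomic_state="degraded")
vpn = ServiceObject("vpn", components=[access, core])
```

Asking the `vpn` object for its state walks the bindings and surfaces the degraded core, which is exactly the traversal a problem-determination process needs.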