As I’m sure regular readers of this blog know, I don’t really like where the NFV ISG is today. I do like some of the places it’s been, though. One place is the notion of a kind of modular virtual infrastructure. The concept of “virtual infrastructure” and its management (via, unsurprisingly, a virtual infrastructure manager or VIM) has evolved within the ISG, but it’s the start of something important: a concept of modularity in carrier cloud. In fact, it raises an interesting question about what “infrastructure” means in the age of the cloud.
Think of a resource pool as a collection of workers. If every worker in your collection has different, specialized skills, then when you need to assign a task you probably have only one option. That loses the spirit of a “pool” of resources, right? On the other hand, if the collection is made up of general handy types, then you can use anyone for any task, and you have a useful pool to work with. That has to be the goal with any pool concept, including carrier cloud.
When the NFV ISG got going, they saw “resources” as the virtual infrastructure, represented by a VIM. They seem to have evolved a bit in their thinking, accepting that there might be multiple VIMs and even that “VIM” might be a special case of “infrastructure manager” or IM. If we accept both of these principles, then we have a starting point for defining what carrier cloud needs.
Cloud computing a la OpenStack, or container computing using Docker Swarm or Kubernetes, defines the notion of resource clusters, which are effectively pools into which you can deploy things. The presumption is that the resources in a cluster are interchangeable, which means you don’t have to put in a lot of logic to decide which resource to pick. NFV, of course, presumes that there are factors that determine which resource would fit best. Those factors generally involve things that traditional resource assignment doesn’t look at; in practice they’re likely to involve the location of the resource, its ownership, its connectivity, and so forth. It would be convenient, and fairly valid, to say that all these factors could be applied above the cluster process: pick a cluster based on the esoteric criteria, then let traditional software pick a host.
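To make that two-stage idea concrete, here’s a minimal Python sketch. The Cluster class, its attribute names, and pick_cluster are all hypothetical, not drawn from any NFV spec or Kubernetes API; the point is only that the “esoteric” criteria filter clusters above the pool, while host selection stays inside it.

```python
from dataclasses import dataclass, field

@dataclass
class Cluster:
    """A pool of interchangeable hosts, plus the attributes NFV cares about."""
    name: str
    location: str                 # e.g., edge office vs. regional data center
    owner: str                    # operator, partner, public cloud
    hosts: list = field(default_factory=list)

    def schedule(self, workload: str) -> str:
        # Inside the cluster, hosts are interchangeable, so "traditional
        # software" (here, a trivial stand-in) can pick any free host.
        return f"{workload} -> {self.name}/{self.hosts[0]}"

def pick_cluster(clusters, *, location=None, owner=None):
    """Apply the 'esoteric' criteria above the cluster process."""
    for c in clusters:
        if location and c.location != location:
            continue
        if owner and c.owner != owner:
            continue
        return c
    raise LookupError("no cluster satisfies the placement criteria")

clusters = [
    Cluster("core-1", "regional-dc", "operator", ["h1", "h2"]),
    Cluster("edge-9", "edge-office", "operator", ["h3"]),
]
target = pick_cluster(clusters, location="edge-office")
print(target.schedule("vFirewall"))   # vFirewall -> edge-9/h3
```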
This, I think, opens a model for “virtual infrastructure management” and for creating a modular notion of carrier cloud hosting. Ideally, a “virtual pod” might be created by combining servers, platform software, and local cluster control for deployment and redeployment. This pod would be represented by a VIM, and the goal of the VIM would be to present the pod as an intent model for the function of “something-hosting”. The “something” would represent any specialization or characterization of the capabilities within the pod.
In this approach, the VIM, like any intent-model element, would be a structure, a hierarchy. That would allow specific requirements to be selected by dissecting the overall requirements of the application or service. The same VIM would provide a management interface in the form of input parameters and an SLA, and would output status indications against the accepted SLA. That combination would make the pod self-contained; attach it as an option to a higher-level intent-model element and it should harmonize management and deployment at the same time.
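Here’s a minimal sketch of what that parameters-and-SLA-in, status-out contract might look like, assuming hypothetical names (HostingPod, SLA) and a trivial stand-in for real telemetry:

```python
from dataclasses import dataclass

@dataclass
class SLA:
    max_latency_ms: float
    min_availability: float        # e.g., 0.9999

class HostingPod:
    """A hypothetical intent-model wrapper for a "something-hosting" pod.

    Parameters and an SLA go in; status against that SLA comes out.
    How hosting actually happens stays hidden inside the model.
    """
    def __init__(self, name: str, children=()):
        self.name = name
        self.children = list(children)   # intent models nest into a hierarchy
        self.sla = None

    def deploy(self, params: dict, sla: SLA) -> None:
        self.sla = sla
        # Real logic would dissect the parameters and delegate pieces of
        # the requirement to child models; here we just pass them down.
        for child in self.children:
            child.deploy(params, sla)

    def status(self) -> dict:
        # Output: status indications measured against the accepted SLA.
        measured_latency = 8.0     # stand-in for real telemetry
        return {
            "pod": self.name,
            "sla_met": measured_latency <= self.sla.max_latency_ms,
        }

pod = HostingPod("edge-pod-1", children=[HostingPod("edge-pod-1a")])
pod.deploy({"image": "vFirewall:1.2"}, SLA(max_latency_ms=10.0, min_availability=0.9999))
print(pod.status())   # {'pod': 'edge-pod-1', 'sla_met': True}
```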
The generalized “IM” notion could work the same way. Network services, meaning connection services, are obviously part of “infrastructure”, and so they could be presented as resources through an infrastructure manager. That would be true whether they were created by hosting something on a server or by using a community of cooperating devices, such as a traditional IP or Ethernet network.
The approach of building services up from “service infrastructure” could be expanded beyond hosting, even beyond network connections, to envelop all of the functional pieces. One of the things I realized early in the world of NFV was that there were functional components of services that were not discretely provisioned per-user or per-service, but shared. IMS and EPC are examples of such functional components; every cell user, every call, doesn’t get a unique instance of them. In effect, these elements are shared resources just like a server pool is, and so they should be represented by an IM and composable as shared resources. If we add in the ability to build “foundation services” (as I called them back in 2013) and then compose them into other services, we have a fairly good picture of what virtual infrastructure should be.
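To illustrate, here’s a rough Python sketch, with invented names, of one IM interface covering both a per-service server pool and a shared foundation service like IMS:

```python
from abc import ABC, abstractmethod

class InfrastructureManager(ABC):
    """One hypothetical "IM" contract covering hosting, connection,
    and shared foundation services alike."""
    @abstractmethod
    def allocate(self, request: dict) -> str: ...

class ServerPoolIM(InfrastructureManager):
    def allocate(self, request: dict) -> str:
        # Per-service provisioning: each request gets its own instance.
        return f"deployed {request['function']} on a pooled host"

class SharedServiceIM(InfrastructureManager):
    """Wraps a shared element like IMS or EPC: callers are attached to one
    multi-tenant instance rather than given their own copy."""
    def __init__(self, service_name: str):
        self.service_name = service_name

    def allocate(self, request: dict) -> str:
        return f"attached {request['subscriber']} to shared {self.service_name}"

# A service composition can then draw on both kinds of "resource" the same way:
pool = ServerPoolIM()
ims = SharedServiceIM("IMS")
print(pool.allocate({"function": "vCPE"}))
print(ims.allocate({"subscriber": "user-1234"}))
```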
A logical approach to virtualized infrastructure is good, but not good enough. If we really want to frame NFV and carrier cloud in an agile and easily integrated way, we need to think in terms of “containers”. Yes, I mean that the cloud/Docker/Kubernetes concept of containers would be good, but I also mean a bit more. A container in the broadest sense is a package that contains an application component and links it with its deployment instructions and parameterization. Think of this for a moment in the context of virtual network functions (VNFs). A VNF today tends to be a fairly random piece of software, which is why integration and onboarding are reported to be a pain. We could fix that by defining a standard VNF container, something that linked to all that variability on the inside but presented a common set of interfaces to the outside world.
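As a sketch of that idea (the VNFContainer name and its interface set are my own invention, not any standards-body artifact), the packaging might look like this: the vendor’s software can be anything on the inside, but the outside presents one fixed set of interfaces that onboarding and management tooling can rely on.

```python
from dataclasses import dataclass, field

@dataclass
class VNFContainer:
    """A hypothetical "standard VNF container": a package linking an
    arbitrary component to its deployment instructions and parameters."""
    image: str                     # the vendor's (fairly random) software
    deploy_script: str             # deployment instructions, packaged with it
    parameters: dict = field(default_factory=dict)

    # The common outward-facing interfaces every container must present:
    def deploy(self) -> str:
        return f"running {self.deploy_script} for {self.image}"

    def configure(self, **params) -> None:
        self.parameters.update(params)

    def health(self) -> str:
        return "ok"                # stand-in for a uniform status probe

# Onboarding any vendor's VNF then becomes the same call sequence:
vnf = VNFContainer(image="acme-firewall:3.1", deploy_script="deploy.sh")
vnf.configure(rules="default-deny")
print(vnf.deploy(), "/", vnf.health())
```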
An intent model in the real world should be a “class model” that defines the general characteristics of a specific feature, function, virtual device, or role. In infrastructure, the ISG’s VIM works as such a class model, but in the VNF world nobody has taken the time to define what kind of intent-model structure should exist for a given feature/function. For example, we should probably have a “superclass” of “data-path-function”, which could then be sub-defined as “firewall” and other categories, and then further defined based on vendor implementation. If you want to sell a firewall VNF, then, it should be something that implements “firewall”, and management and deployment practices for anything that implements “firewall” should then work for all the implementations.
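Here’s what that class hierarchy might look like in Python; the names (DataPathFunction, Firewall, AcmeFirewall) and the trivial rule logic are purely illustrative:

```python
from abc import ABC, abstractmethod
from typing import Optional

class DataPathFunction(ABC):
    """Hypothetical superclass: anything that sits in the data path."""
    @abstractmethod
    def process(self, packet: bytes) -> Optional[bytes]: ...

class Firewall(DataPathFunction):
    """The "firewall" intent class: the common contract every vendor
    implementation must satisfy."""
    @abstractmethod
    def add_rule(self, rule: str) -> None: ...

class AcmeFirewall(Firewall):
    """A stand-in vendor implementation, interchangeable with any other
    class that implements "firewall"."""
    def __init__(self):
        self.rules = []

    def add_rule(self, rule: str) -> None:
        self.rules.append(rule)

    def process(self, packet: bytes) -> Optional[bytes]:
        # Trivial stand-in policy: drop everything if any rule exists.
        return None if self.rules else packet

# Management and deployment code is written against Firewall, never AcmeFirewall:
fw: Firewall = AcmeFirewall()
fw.add_rule("deny all")
print(fw.process(b"hello"))   # None -> dropped
```

The design point is that the management side binds to the intent class, so swapping one vendor’s firewall for another changes nothing above it.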
I’ve talked about refining the intent-model notion by defining class structures for types of functions, and making this abstract notion into a container-like element would mean creating a software implementation of the abstract function. Every vendor, having a responsibility to frame their own virtual function offering in one or more “intent classes”, would then present a self-integrating element. Best of all, this would be a totally open approach.
There are some players introducing at least a piece of this approach. Apstra has intent-based infrastructure, for example, and EnterpriseWeb has a representational modeling approach that facilitates onboarding VNFs through a kind of “virtual container”. Modeling languages/tools like OASIS’s TOSCA could also be used to define both the “virtual container” and “IM” pieces of the model, and to structure the hierarchy that I think is essential to a complete implementation.
NFV has evolved under market pressure, but the pace at which early and obvious limitations have been accepted seems much slower than necessary. I’ve been a part of these discussions for five years now, and we’re still just coming around to the obvious. Vendors may be moving faster than standards bodies, even faster than open-source groups. Even with the disappointing progress of architectural thinking, we see some examples of new-think across the board. This can all be done, in short, and if it were done, it might not only help deal with the NFV bugaboo of integration, but also get NFV thinkers thinking in the right direction about what virtualization means.