Network functions virtualization (NFV) is supposed to be about running network service features on commodity servers using “virtualization”. While the vCPE edge-hosting movement has demonstrated that there’s significant value in running virtual functions in other ways, virtualization and the cloud are still the “official” focus of the ETSI work, and what most think will be the primary hosting mechanism in the long term. The infrastructure that makes up the resource pool for NFV is critically important because most NFV dollars will be spent there, so here in the second of my series on NFV openness, we’ll look at those resources.
In the ETSI E2E specification, the pool of resources used to host and connect virtual functions is called Network Functions Virtualization Infrastructure, or NFVI. Services, described in some way to the Management and Orchestration (MANO) piece of NFV, are supposed to be committed to NFVI through the mechanism of one or more Virtualised Infrastructure Managers (VIMs). If we looked at NFV from a software-development perspective (which, since it’s software, we should), the VIM is the abstraction that represents resources of any sort. That means that NFV management and orchestration really isn’t dependent on NFVI directly; it shouldn’t even know about NFVI. It should know only VIMs.
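To make that boundary concrete, here’s a minimal sketch in Python of what “knowing only VIMs” means in code. The class and method names (Vim, deploy, destroy, commit_service) are hypothetical, not drawn from the ETSI specification; the point is only that orchestration logic is written against the abstraction, never against NFVI.

```python
# A minimal sketch (not from the ETSI spec) of the boundary described above:
# MANO code is written against an abstract VIM interface and never touches NFVI directly.
from abc import ABC, abstractmethod


class Vim(ABC):
    """Whatever sits behind this interface -- a cloud stack, an edge device,
    a legacy management system -- is invisible to MANO."""

    @abstractmethod
    def deploy(self, descriptor: dict) -> str:
        """Realize a described function or service; return an instance id."""

    @abstractmethod
    def destroy(self, instance_id: str) -> None:
        """Tear down a previously deployed instance."""


def commit_service(vim: Vim, descriptor: dict) -> str:
    # MANO-side orchestration depends only on the Vim abstraction, not on NFVI.
    return vim.deploy(descriptor)
```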
In the software world, the NFV specifications would have made clear that anything is fine for NFVI as long as there’s a VIM that represents it. We shouldn’t be worried about “NFVI compatibility” except in VIM terms, because it’s the VIM that has to provide that compatibility. And that raises the question of what the VIM says things should be compatible with.
Deployment of virtual functions on servers via VMs or containers obviously would look a lot like what a cloud management software stack like OpenStack would do. In fact, one could argue that a baseline reference for a VIM would be OpenStack, and in particular the hosting (Nova) and networking (Neutron) elements of it. OpenStack assumes that you’re going to deploy a network-connected (cloud) service by defining the network elements and placing hosting points in them. VIMs that did nothing but deploy and connect virtual functions could be little more than OpenStack or another cloud stack.
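As a rough illustration of that baseline, here is what a simple deployment through OpenStack’s own SDK looks like. This is a hedged sketch only; the cloud name, image and flavor IDs, and network names are placeholders, not anything prescribed by NFV.

```python
# A sketch of "OpenStack as baseline VIM" using the openstacksdk library.
# The cloud name, image/flavor IDs, and names below are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")  # assumes a configured clouds.yaml entry

# Neutron: define the network element the service will attach to.
net = conn.network.create_network(name="svc-net")
conn.network.create_subnet(network_id=net.id, ip_version=4, cidr="10.0.0.0/24")

# Nova: place a hosting point (here, a VM carrying a virtual function) on it.
server = conn.compute.create_server(
    name="vnf-firewall-1",
    image_id="IMAGE_UUID",    # placeholder
    flavor_id="FLAVOR_UUID",  # placeholder
    networks=[{"uuid": net.id}],
)
conn.compute.wait_for_server(server)
```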
This limited view presents some problems because a “service” is almost certainly more than just a connected set of virtual functions. There are legacy elements, things like routers and switches and DNS and DHCP, that have to be provisioned alongside the software that’s designed to offer network-related features. Thus, a VIM should be a superset of OpenStack or other cloud stacks, doing what they do but also handling the legacy pieces.
A VIM should also present a very specific and implementation-independent abstraction of the things it can do, which takes us back to what’s now popularly called an “intent model”. Whether I want to deploy a service chain or an IP VPN, I need to describe the goal and not the implementation. That means that VIMs would be responsible for creating a set of specific abstractions, intent models, on whatever infrastructure they represent. If an abstraction like “vCPE” or “VPN” is defined by a VIM, then it has to be defined in the same way no matter how the VIM realizes it. If that’s true, then any NFVI can be freely substituted for another, provided the VIMs for each recognize the abstractions you plan to reference.
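Here’s a small, purely hypothetical sketch of what that interchangeability means in practice: two VIM classes (invented names) accept the identical “VPN” intent descriptor and realize it on completely different infrastructure.

```python
# Illustrative sketch; the descriptor fields and class names are hypothetical.
VPN_INTENT = {
    "type": "VPN",
    "sites": ["newyork", "london"],
    "bandwidth_mbps": 100,
}


class HostedVpnVim:
    """Realizes the intent by chaining virtual routers in a cloud."""

    def deploy(self, intent: dict) -> str:
        assert intent["type"] == "VPN"
        # ...spin up vRouter VMs and overlay tunnels here...
        return "hosted-vpn-001"


class LegacyMplsVim:
    """Realizes the same intent by provisioning an existing MPLS network."""

    def deploy(self, intent: dict) -> str:
        assert intent["type"] == "VPN"
        # ...push provisioning requests to the MPLS management system here...
        return "mpls-vpn-001"


# Because both accept the identical intent, either NFVI can be substituted
# for the other without touching the service description.
for vim in (HostedVpnVim(), LegacyMplsVim()):
    vim.deploy(VPN_INTENT)
```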
This is how edge-hosted VNFs should be supported: as a specialized VIM that makes deployment on a customer-sited device look like deployment as part of a cloud. It’s also how dedicated servers, diverse cloud stack architectures, containers, and so forth should be supported. It’s a totally open model, or at least it can be.
What threatens this nice picture is any of the following:
- The notion that there can be only one VIM in an implementation. If that’s the case, then every vendor’s infrastructure would have to be supported under that single VIM, and since the VIM itself would likely come from a vendor, that’s unlikely to happen. Only by recognizing multiple VIMs “under” MANO can you have open infrastructure.
- Any implementation-specific reference made in the abstraction(s) that describe the VIM’s capabilities. If, for example, you require that an abstraction incorporate a specific policy structure or CLI parameters, you’ve built a bridge between the model of the feature (the VIM abstraction you’re using) and a specific implementation. That forecloses the use of that description with other VIMs, as the sketch after this list illustrates.
- An inconsistent way of specifying the abstractions or intent models. If one vendor says that the VIM abstraction is “VPN” and another says it’s the specific device structure that makes up a VPN, then the two models can’t be used interchangeably.
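To illustrate the second and third points, compare a portable intent descriptor with one that leaks implementation detail. The field names here are hypothetical, but the pattern is exactly the problem those bullets describe.

```python
# Hypothetical descriptors. The first is portable across VIMs; the second is not,
# because it bakes one vendor's policy structure and device CLI into the model.
portable_vpn = {
    "type": "VPN",
    "sites": ["newyork", "london"],
    "bandwidth_mbps": 100,
}

non_portable_vpn = {
    "type": "VPN",
    "sites": ["newyork", "london"],
    "vendor_policy_map": "acme-qos-profile-7",               # vendor-specific policy structure
    "cli": "set vpn bgp neighbor 10.0.0.1 remote-as 65001",  # device CLI baked into the model
}
```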
All of this is important, even critical, but even this isn’t sufficient to ensure that VIMs and NFVI are really open. The other piece of the puzzle is the management side.
Even if you can deploy a “vCPE” or “VPN” using a single common model and a set of compatible VIMs, you still have the question of management. Deployment choices have to be collected in a common model or abstraction, and so do management choices. Any VIM that supports an abstract model (vCPE, etc.) has to support a parallel abstract management model as well, and whatever management and operations processes are used in NFV or in OSS/BSS/NMS have to operate on service resources through this abstraction only.
All abstract NFV features either have to be managed the same way, based on the same management abstractions, or management processes have to be tuned to the specific deployment model. This dilemma is why the ISG ended up with the notion of VNF-specific VNF Managers. But if we solve the problem of differences in management by deploying customized VNF Managers alongside the VNFs, we still have to address the question of where they get their data on resource state. We also have to deal with how they do things like scaling in and out. These should be abstract features of the VIM models, not scripts or hooks that link a VNF Manager directly to the internals of a VIM or NFVI. The latter would again create non-portable models and brittle implementations.
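Here’s a minimal sketch, with invented class and method names, of the difference: the VNF Manager below calls abstract lifecycle operations on the intent-model instance and never reaches into the VIM’s or NFVI’s internals.

```python
# Hypothetical sketch: lifecycle actions like scaling are abstract operations
# on the VIM's intent-model instance, not scripts that touch NFVI internals.
class IntentInstance:
    def __init__(self, instance_id: str):
        self.instance_id = instance_id
        self.replicas = 1

    def scale_out(self) -> None:
        # The VIM decides *how* to add capacity on its own infrastructure.
        self.replicas += 1

    def state(self) -> dict:
        # Resource state is reported through the abstraction, not raw resource MIBs.
        return {"replicas": self.replicas, "operational": True}


def vnf_manager_policy(instance: IntentInstance, load: float) -> None:
    """A VNF Manager works only against the abstract interface."""
    if load > 0.8:
        instance.scale_out()
```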
I think the message here is simple. Infrastructure is always represented by an intent model or models, implemented through a VIM. The model has to define the function (“VPN”), the interfaces/ports, and the management properties, which would in effect be a MIB. The VIM has to transliterate between these MIB variables for an instantiated model and the resource MIBs that participate in the instantiation. If you have this, then you can deploy and manage intent models through a VIM, and if two VIMs support the same intent model they would generate equivalent functionality and manageability. That’s how an open model has to work.
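A final sketch of that transliteration step, with hypothetical variable names: the VIM derives the model-level MIB that the intent model promises from whatever resource MIBs actually participated in the instantiation, so management sees the same variables no matter which VIM did the work.

```python
# Hypothetical mapping from raw resource readings to the intent model's MIB.
def transliterate(resource_mibs: list[dict]) -> dict:
    """Derive the intent model's management variables from resource state."""
    return {
        "vpn.operational_state": all(r["if_oper_up"] for r in resource_mibs),
        "vpn.throughput_mbps": sum(r["if_out_mbps"] for r in resource_mibs),
    }


# Two different resource sets (e.g. vRouter VMs vs. MPLS edge ports) can yield
# the same model-level MIB, which is what makes the VIMs interchangeable.
print(transliterate([
    {"if_oper_up": True, "if_out_mbps": 40.0},
    {"if_oper_up": True, "if_out_mbps": 55.0},
]))
```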