NFV management has never been my favorite part of NFV, and I’ve groused about it here fairly regularly. It’s probably time to talk about the issues in more detail, and so I’m going to do an as-yet-undetermined number of blogs in a series about the issue.
To get this straight, we have to set the stage. NFV presumes that virtual network functions (VNFs) are collections of components that are hosted and connected during the deployment process by the NFV Orchestrator. The management, meaning lifecycle management, of this collection is the responsibility of the VNF Manager or VNFM.
VNFs would have to be collected in some sort of subnetwork, and this is shown in the ETSI End to End Architecture Document’s Figure 3. The easiest way to think of this would be as an IP subnet, though no specific reference to a network structure is provided in the document. I’m presuming one here because it’s difficult to talk about management issues when you don’t have any specific way to reference the things you’re managing.
In our hypothetical NFV subnet we’d have a bunch of hosted software components (VNFCs) that are linked somehow. The ETSI material calls the relationship a forwarding graph, but I’m not sure that doesn’t presume simple linear service chains. Even if you had chains, you’d need something to chain through, meaning a network service that offers connectivity to the components. Using that connectivity, the elements inside a subnet would all be able to talk to each other, presuming they had an address to reference. Our components will also have to be visible to the real world, at least in part. The ETSI Figure 3 shows endpoints connected to VNF1 and VNF3, which presumes that these endpoint connections on the VNFs are “visible” in the service address space, outside the VNF subnet.
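To make the graph idea concrete, here’s a minimal sketch in Python. The component and port names are mine, invented for illustration, and nothing here comes from the ETSI material; the point is simply that a forwarding graph need not be a linear chain:

```python
# A sketch of a VNF forwarding graph as plain data structures.
# All names (components, ports) are hypothetical, not from the ETSI spec.

vnfc_nodes = {"fw", "dpi", "nat", "lb"}  # hosted software components (VNFCs)

# Directed links between component ports. A linear service chain would be
# a straight path, but nothing prevents branches or fan-out.
forwarding_graph = [
    ("fw:out", "dpi:in"),
    ("dpi:out", "nat:in"),
    ("dpi:mirror", "lb:in"),  # a branch: this is a graph, not just a chain
]

# Every port in the graph should belong to a known component.
assert all(p.split(":")[0] in vnfc_nodes
           for link in forwarding_graph for p in link)

def neighbors(component):
    """Components reachable in one hop from the given component."""
    return {dst.split(":")[0] for src, dst in forwarding_graph
            if src.split(":")[0] == component}

print(neighbors("dpi"))  # {'nat', 'lb'} -- more than one successor
```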
Security, compliance, and sanity seem to dictate the presumption that our subnet is based on something like an RFC 1918 address space (assuming IPv4), so the VNFs would all be invisible to the outside world. To make some of the ports on VNFs visible, we’d have something like NAT to translate between one of the private addresses and a public address. We’d also need a DHCP function to assign addresses and a DNS to allow the components to see each other. If we do this, then every VNF lives in its own private universe, secure from visibility to other VNFs and even to its own service address space. It shares only ports it is designed to share, and only with other subnets it specifically elects to link with.
So here’s where we are: something like Figure 3 would probably be set up by defining a private subnet with its own DNS and DHCP, and with NAT to translate the internal addresses it wants to publish into addresses in the service address space.
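Here’s what that bookkeeping looks like as a sketch, using Python’s standard ipaddress module. The specific addresses and component names are invented for illustration; only the pattern matters:

```python
import ipaddress

# A sketch of the Figure 3 setup as address bookkeeping.

vnf_subnet = ipaddress.ip_network("10.0.1.0/24")  # RFC 1918, invisible outside

# "DHCP": assign private addresses to the components inside the subnet.
hosts = iter(vnf_subnet.hosts())
private_addr = {name: next(hosts) for name in ("vnfc-a", "vnfc-b", "em")}

# "DNS": components find each other by name, inside the subnet only.
def resolve(name):
    return private_addr[name]

# "NAT": only explicitly published ports get a service-space address.
nat_table = {
    (private_addr["vnfc-a"], 5060): ("198.51.100.10", 5060),  # a visible endpoint
}

assert resolve("vnfc-b").is_private          # internal only, never exposed
print(nat_table[(private_addr["vnfc-a"], 5060)])
```

Anything that doesn’t appear in the NAT table simply can’t be reached from outside the subnet, which is the whole point.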
We’re not done with subnets, though. We have to be able to deploy this stuff, right? Thus, we have to presume that there is an infrastructure network where all of the resources live. We also have to assume that either in this network or in yet another network we have all the elements of NFV software, which means MANO, EMSs, and OSS/BSS connections (the actual OSS/BSS could be elsewhere but we have to be able to reach it).
You’re probably wondering why I’m getting into this, and the answer is that the framework we presume has to be there to deploy NFV will also have to support management of NFV services once they’re deployed. We have to be able to harmonize the role of VNFM within this structure, and if we have any issues we have to get them addressed.
Management starts with the presumption that there is, included with a VNF, an Element Manager that performs all the VNF’s typical management functions. This EM links with the VNFM for resource information and to provide lifecycle management. The VNFM would go to the Virtualized Infrastructure Manager (VIM) to deploy something, such as a scale-out. However, it appears that the NFV Orchestrator also goes to the VIM for deployment. Wouldn’t it be logical to say that “deployment” is itself a lifecycle stage? Yes, but if an EM has to request lifecycle management, nothing can happen until the EM, which presumably co-deploys with its VNF, is actually deployed to do the requesting.
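The chicken-and-egg is easier to see as a call flow. This sketch is purely illustrative; the class names and methods are mine, not the ETSI-defined interfaces:

```python
# A sketch of the ordering problem: initial deployment can't be EM-driven,
# because the EM doesn't exist yet.

class VIM:
    def instantiate(self, what):
        print(f"VIM: hosting {what}")

class VNFM:
    def __init__(self, vim):
        self.vim = vim
    def lifecycle_request(self, em, action):
        # e.g. a scale-out requested by an EM on behalf of its VNF
        print(f"VNFM: {action} requested by {em}")
        self.vim.instantiate(f"extra capacity ({action})")

class Orchestrator:
    def __init__(self, vim):
        self.vim = vim
    def deploy(self, vnf):
        # The Orchestrator itself must go to the VIM for the first step,
        # since the EM co-deploys with the VNF and can't request anything yet.
        self.vim.instantiate(vnf)
        self.vim.instantiate(f"{vnf}'s EM")

vim = VIM()
Orchestrator(vim).deploy("vnf-1")                       # deployment: no EM in the loop
VNFM(vim).lifecycle_request("vnf-1's EM", "scale-out")  # later lifecycle stages
```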
Apart from this we have some challenges of addressing and security. It’s reasonable to assume that the EM talks to the VNFM through one of those NATted interfaces, so we can at least make the connection. As long as there’s some record of the EM’s address so that the VNFM can contact it when it needs to, we’re fine.
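The record-keeping that implies is trivial, but somebody has to own it. A sketch, with invented names and addresses:

```python
# When a VNF deploys, record the NATted (publicly reachable) address of its
# EM so the VNFM can call back later. Names and addresses are illustrative.

em_registry = {}  # vnf instance id -> (service-space address, port) of its EM

def register_em(vnf_id, nat_addr, port):
    em_registry[vnf_id] = (nat_addr, port)

def em_endpoint(vnf_id):
    """Where the VNFM should contact this VNF's EM, if it needs to."""
    return em_registry[vnf_id]

register_em("vnf-1", "198.51.100.11", 8443)  # recorded at deployment time
print(em_endpoint("vnf-1"))
```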
An issue arises if we look deeper into the VNFM proposal. There’s a goal of supporting multiple VNFMs, so that VNF-specific VNFMs could be offered. The reason given is that the task of lifecycle managing a VNF could be pretty specialized, and that may be true. However, we now have to look at the addressing and security issues.
If a VNFM is provided by a VNF vendor, where does it live? You have three options: you can put it inside the VNF, inside the private subnet where MANO and the rest of the NFV software lives, or in some third, separate subnet. What are the implications? Let’s walk through the options, then sketch them below.
If you put the VNFM inside the VNF then we’re letting VNFs manage their own lifecycles, allocate resources, etc. We have to give the VNFM a link to the VIM, which means that the VNF can “see” infrastructure directly and control real resources. I think this is a serious security and stability problem.
If we put the VNFM inside the MANO subnet, we’re letting vendors add service-specific software inside NFV’s control software, where there are no barriers to what it could do. That is, IMHO, a far worse security and stability problem.
If we put the VNFM in its own subnet, we’re still giving that subnet access to a VIM, and while that could be made more secure than the first two options, it’s still not ideal. The VNFM can still directly control resources.
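A sketch of the three options makes the common thread obvious: wherever the vendor VNFM lives, it ends up with a direct line to the VIM. The subnet names here are mine, not ETSI’s:

```python
# Why none of the placements really contains the risk: in each case the
# vendor VNFM must appear on the VIM's allowed-caller list, so it can drive
# real resources directly. Only the blast radius differs.

vim_allowed_callers = set()

def place_vnfm(option, subnet):
    # Whatever subnet the VNFM lands in, it needs a VIM link to do its job.
    vim_allowed_callers.add(subnet)
    return f"VNFM in '{subnet}' ({option})"

print(place_vnfm("option 1: inside the VNF", "vnf-1 subnet"))
print(place_vnfm("option 2: with MANO", "mano subnet"))
print(place_vnfm("option 3: separate", "vnfm-x subnet"))

print("subnets with direct resource control:", vim_allowed_callers)
```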
My conclusion here is that we need to be looking at NFV deployments like cloud applications in a multi-tenant world. Amazon and Google both provide a mechanism much like the one I’ve described: components are hosted in subnets using private IP addresses, and NAT or “elastic IP” addresses map them to addresses visible outside. We have to be able to draw a picture of an NFV deployment as a set of IP subnets that interconnect in some way; Google offers such a picture in its own Andromeda architecture. If we can draw the subnet structure of NFV, we can at least see where the connections between private and public address spaces are, and whether those connections are sufficient to let NFV software function as needed and secure enough to be acceptable to operators.
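Here’s the kind of picture I mean, reduced to data: subnets plus the explicit NAT crossings between them. Each crossing is a deliberate, auditable hole, and everything else is unreachable from outside. All names here are invented for illustration:

```python
# A sketch of an NFV deployment as a set of subnets and NAT crossings.

subnets = {
    "vnf-1":   "private",   # a VNF's own RFC 1918 space
    "vnf-2":   "private",
    "mano":    "private",   # NFV control software
    "service": "public",    # the service address space users see
}

# (from_subnet, to_subnet, why) -- each entry is a deliberate, auditable hole
nat_crossings = [
    ("vnf-1", "service", "published service port"),
    ("vnf-2", "service", "published service port"),
    ("vnf-1", "mano",    "EM-to-VNFM management link"),
]

for src, dst, why in nat_crossings:
    print(f"{src} ({subnets[src]}) -> {dst} ({subnets[dst]}): {why}")
```

If a picture like this can’t be drawn for a given NFV implementation, that in itself tells you something about its security story.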
Functionality is yet another matter. The best way to see how all this would work is to walk through a deployment, a horizontal scaling, and a fault response. That’s what I’ll do in later blogs.