Last week, and in prior blogs, I mentioned the fact that virtual network functions (VNFs) have to be recipients of NFV services, and that the sum of these services may determine the ease with which current network code could migrate to become VNFs. It’s also a determinant in the portability of VNFs across multiple platforms, of course. Today I’d like to talk about what “NFV services” to VNFs are, and how they might develop.
The foundation of all services offered to VNFs is the set of execution platform services that actually provide for VNF hosting. If a piece of network code runs on Linux with a given set of middleware tools, and if that combination has to be present in a machine image to deploy the VNF in a virtualization or cloud environment, then that stuff is platform services. The responsibility of NFV in this case is to permit the assembly of correct machine images (or similar artifacts) representing the VNFs and to deploy them on suitable resources. This process is well understood because it happens in every virtualized data center and every cloud.
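To make that concrete, here’s a minimal sketch of the deployment step as it would look in any garden-variety cloud, using the OpenStack SDK from Python. The cloud name, image, flavor, and network identifiers are placeholders I’ve invented for illustration; nothing here is NFV-specific, which is exactly the point.

```python
# Minimal sketch: deploying a VNF image the same way any cloud workload is
# deployed. The cloud name and UUIDs below are placeholders, not part of any
# NFV specification.
import openstack

conn = openstack.connect(cloud="my-nfv-cloud")  # credentials come from clouds.yaml

server = conn.compute.create_server(
    name="vnf-firewall-01",
    image_id="IMAGE_UUID_OF_VNF_MACHINE_IMAGE",   # the packaged VNF image
    flavor_id="FLAVOR_UUID_MEETING_VNF_NEEDS",    # CPU/RAM the VNF requires
    networks=[{"uuid": "TENANT_NETWORK_UUID"}],   # see connection services below
)
conn.compute.wait_for_server(server)              # block until the VNF is ACTIVE
```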
The only thing that NFV adds to the mix is a possible refinement of the “suitable resources” part. Suitability has two basic dimensions: what the VNF needs for technical execution, and what optimization policies might be applied to choose among the technically suitable resources. We’ve had lots of discussions about how it would be really great to have all manner of policy-based decision-making here, but remember that anything you do to optimize the use of a resource pool makes the effective size of the pool smaller and the efficiency of your virtual resources lower. I personally think we’re gilding the lily in most of these discussions. Yes, there will be some broad optimization policies imposed, but the business case for NFV simply can’t afford excessive complexity at the optimization level.
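Here’s a toy illustration of that trade-off; it isn’t any standard’s placement algorithm, just plain Python. Treat each optimization policy as a filter over the host pool and watch the effective pool shrink as policies stack up.

```python
# Each placement policy is a predicate; every predicate you add can only
# shrink (never grow) the set of hosts you're allowed to use.
from typing import Callable, Dict, List

Host = Dict[str, object]

def candidates(pool: List[Host], policies: List[Callable[[Host], bool]]) -> List[Host]:
    """Return the hosts that satisfy every policy."""
    return [h for h in pool if all(p(h) for p in policies)]

pool = [
    {"name": "hostA", "free_vcpus": 16, "zone": "east", "dpdk": False},
    {"name": "hostB", "free_vcpus": 4,  "zone": "east", "dpdk": False},
    {"name": "hostC", "free_vcpus": 32, "zone": "west", "dpdk": True},
]

technical_fit = lambda h: h["free_vcpus"] >= 8   # what the VNF needs to execute
keep_in_east  = lambda h: h["zone"] == "east"    # an optimization policy
wants_dpdk    = lambda h: h["dpdk"]              # another optimization policy

print(len(candidates(pool, [technical_fit])))                            # 2 hosts
print(len(candidates(pool, [technical_fit, keep_in_east])))              # 1 host
print(len(candidates(pool, [technical_fit, keep_in_east, wants_dpdk])))  # 0 hosts
```

Each policy looks sensible on its own; together they can empty the pool entirely, which is the complexity the business case can’t absorb.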
The next layer to consider in our NFV services picture is connection services. VNFs are communications software, and thus they expect to run in a given network framework. Most, for example, think they are running inside a subnetwork that’s connected through a default gateway to a broader WAN (or the Internet). They assume they have the DHCP and DNS services needed for typical application networking.
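As a sketch of what that framework looks like when you build it with ordinary cloud tooling (again the OpenStack SDK, with an invented cloud name and illustrative addresses):

```python
# The connection environment a typical VNF expects: an IP subnet with a
# default gateway, DHCP, and DNS. All names and addresses are placeholders.
import openstack

conn = openstack.connect(cloud="my-nfv-cloud")

net = conn.network.create_network(name="vnf-tenant-net")
subnet = conn.network.create_subnet(
    network_id=net.id,
    ip_version=4,
    cidr="10.20.0.0/24",
    gateway_ip="10.20.0.1",          # the default gateway the VNF assumes
    is_dhcp_enabled=True,            # addresses handed out the normal way
    dns_nameservers=["10.20.0.2"],   # DNS, just as in any IP subnetwork
)
```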
This is the area where I think current thoughts about NFV architectures need the most work. We’ve talked about “forwarding graphs”, diagrams representing the flow of information between VNFs, as though these graphs were the solution to connection services. In point of fact, in virtually all network applications today, it’s the application logic that determines where a given application sends traffic and where it receives traffic from. We don’t pipeline applications today; we simply connect them onto subnetworks and let the applications “find” each other through normal IP means. They then communicate as they were written to communicate. Thus, the priority in connection services is to replicate the network environment in which the VNFs expect to run. If that happens to include pipelines/tunnels between components, fine, but in virtually no case would that be sufficient. You have IP VNFs? Then you need to put them into an IP framework or they won’t work.
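In cloud terms, “putting them into an IP framework” is nothing more exotic than attaching the VNFs to the same tenant network and letting them behave as they always have. Here’s a hedged sketch (all UUIDs are placeholders, and it assumes a network like the one created above):

```python
# Two VNFs dropped onto one tenant network. Nothing here describes who
# forwards traffic to whom; the VNF code itself decides that, exactly as the
# original application did. All UUIDs are placeholders.
import openstack

conn = openstack.connect(cloud="my-nfv-cloud")

for name, image_id in [("vnf-firewall-01", "FIREWALL_IMAGE_UUID"),
                       ("vnf-nat-01", "NAT_IMAGE_UUID")]:
    conn.compute.create_server(
        name=name,
        image_id=image_id,
        flavor_id="FLAVOR_UUID",
        networks=[{"uuid": "VNF_TENANT_NET_UUID"}],  # the same IP framework
    )
```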
The next and final layer to consider is also the most complicated: network-addressable services. Some of these (DNS, DHCP, gateway) are lumped into my connection services category because they are a normal part of “the network”. However, applications may expect to contact the features and services of other network-connected applications or devices in order to run. They may also be expected to expose some of their own features as network-addressable services to others.
Management offers us the best example of network-addressable services, but it also demonstrates how our three layers of VNF services have to interwork to get something to actually work when deployed.
When an application is loaded, it gets most of the services it uses through local APIs that address either operating system or middleware features. These features would often include management interfaces that give the application some information about local resources, so “management” has to include some resource-to-VNF views. The complication here, as always with shared resources, is that no application/VNF can be allowed unfettered access to shared resources, or it could contaminate the behavior of other applications.
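Here’s a toy illustration of what “fettered” access might look like; the metric names and structure are invented, not drawn from any NFV specification. The idea is that the VNF never queries shared-resource telemetry directly; an NFV-side mediation function hands it a view scoped to its own allocation.

```python
# The VNF sees only its own slice of the shared host's telemetry. The data
# layout below is purely illustrative.
SHARED_HOST_METRICS = {
    "host": "hostA",
    "total_vcpus": 32,
    "per_vm": {
        "vnf-firewall-01": {"vcpus": 4, "cpu_util": 0.37},
        "vnf-nat-01":      {"vcpus": 2, "cpu_util": 0.12},
        "other-tenant-vm": {"vcpus": 8, "cpu_util": 0.80},
    },
}

def management_view(metrics: dict, vnf_name: str) -> dict:
    """Return only the slice of resource data this VNF is entitled to see."""
    own = metrics["per_vm"][vnf_name]
    return {"allocated_vcpus": own["vcpus"], "cpu_util": own["cpu_util"]}

print(management_view(SHARED_HOST_METRICS, "vnf-firewall-01"))
# {'allocated_vcpus': 4, 'cpu_util': 0.37} -- nothing about other tenants leaks through
```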
Connecting the application is the next step, and in traditional IP networking applications don’t set up their own networks; they are deployed onto something and connected into something by a parallel set of tools. OpenStack deploys via the Nova APIs and connects via Neutron. But while we have to start our consideration of connection services with what the application code that became our VNF expected, we also have to recognize that in many cases multiple VNFs will combine to create a “virtual device”. In the real world, the components of such a virtual device wouldn’t be accessible from the outside, and we can’t let them be accessible from the inside either. Thus, we have to assume that service data paths and address spaces are somehow separated from the intra-package, VNF-to-VNF pathways.
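One plausible way to get that separation with ordinary cloud tooling is simply to give the “virtual device” a private network for its internal VNF-to-VNF traffic and expose only one service-facing network. A sketch with the OpenStack SDK (names and CIDRs are invented):

```python
# Address-space separation for a multi-VNF "virtual device": an internal
# network for intra-package traffic and a separate service-facing network.
# Names and CIDRs are illustrative placeholders.
import openstack

conn = openstack.connect(cloud="my-nfv-cloud")

# Intra-package network: only the package's own VNFs attach here, and it is
# deliberately never attached to a router, so nothing on the service data
# path has a route into this address space.
internal = conn.network.create_network(name="vdev-internal")
conn.network.create_subnet(
    network_id=internal.id, ip_version=4, cidr="192.168.100.0/24",
)

# Service-facing network: the only place the virtual device is addressable.
service = conn.network.create_network(name="customer-service-net")
conn.network.create_subnet(
    network_id=service.id, ip_version=4,
    cidr="10.30.0.0/24", gateway_ip="10.30.0.1",
)
```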
I’m of the view that cloud deployment models (like TOSCA from OASIS) represent the most logical way to describe VNF platform and connection services, because NFV will be a kind of cloud-plus: an extension of cloud capabilities to include improvements in management and multi-tenancy, but still fundamentally a cloud. We should not be inventing new constructs like “forwarding graphs” to describe something for which working software tools already support an alternative model in the real world. So VNF management has to look more like cloud management at both the platform and connection services levels.
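To give a flavor of what describing platform and connection services with a cloud deployment model means, here’s a loosely TOSCA-shaped description of one VNF, written as a Python dict purely for illustration. Real TOSCA is YAML with a much richer grammar; the type names below come from the TOSCA Simple Profile, but the overall template is my own sketch, not a valid service template.

```python
# A loosely TOSCA-flavored description of a VNF's platform and connection
# needs. Illustrative only; not a complete or valid TOSCA service template.
vnf_template = {
    "node_templates": {
        "vnf_firewall": {
            "type": "tosca.nodes.SoftwareComponent",   # the VNF software itself
            "requirements": [
                {"host": "vnf_host"},                  # platform services
                {"network": "tenant_subnet"},          # connection services
            ],
        },
        "vnf_host": {
            "type": "tosca.nodes.Compute",             # what the VNF runs on
            "capabilities": {
                "host": {"properties": {"num_cpus": 4, "mem_size": "8 GB"}},
            },
        },
        "tenant_subnet": {
            "type": "tosca.nodes.network.Network",     # the IP framework
            "properties": {"cidr": "10.20.0.0/24"},
        },
    },
}
```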
But what about “NFV services”? Does NFV demand that we create a set of services (presumably primarily network-addressable services) that are exercised by VNFs to support NFV’s MANO functions? If we make that decision, we have a couple of problems. First, current code wouldn’t contain references to these NFV services, so we’d have to retrofit any logic we expected to use as the basis for VNFs. That could mean abandoning or forking a lot of open-source projects. Second, we can’t incorporate references to NFV services without making VNFs non-portable unless we define those services and provide a standard way of interfacing with them. I believe a baseline rule for NFV should be that an NFV implementation can run, as a VNF, any current network application. Yes, vendors could provide optional NFV services of their own, but they should always be able to support the baseline: the open-source inventory of network tools we can already run in the cloud or in virtualized data centers.
So that’s a picture of NFV from the VNF perspective. I’d suggest that operators, journalists, analysts, and everyone who’s interested in building or using NFV tools think about these points. If we lose sight of the basic mission of NFV—running VNFs—we lose everything.