The Critical Open-Source VNF: How We Could Still Get There

One of the most logical places for operator interest in open-source software to focus is in the area of virtual network functions (VNFs).  Most of the popular functions are available in at least one open-source implementation, and operators have been grousing over the license terms for commercial VNFs.  It would seem that an open-source model for VNFs would be perfect, but there are still barriers to address before the approach can work.

VNFs are the functional key to NFV because they’re the stuff that all the rest of the NFV specifications are aimed at deploying and sustaining.  Despite this, VNFs have in some sense been the poor stepchild of the process.  From the first, everyone has ignored the fundamental truth that defines VNFs—they’re programs.

Virtually all software today is written to run on a specific platform, with hardware and network services provided through application programming interfaces (APIs) presented either by an operating system or by what’s called “middleware”, system software that performs a special set of useful functions to simplify development.  In some cases, the platform (and in particular the middleware) is independent of the programming language, and in others it’s tightly integrated.  Open-source software is no exception.

A convenient way to visualize this is to draw a box representing the program/component, and then show a bunch of “plugs” coming out of the box.  These plugs represent the APIs the program uses, APIs that have to be somehow connected to services when it’s run.  Let’s presume these plugs are blue.

When something like NFV comes along, it introduces an implicit need for “new” middleware because it introduces at least a few interfaces that aren’t present in “normal” applications.  If you look at the ETSI diagrams you see some of these reference interfaces.  These new APIs add new plugs to the diagram, and if you envision them in a different color like red, you can see the challenge that NFV poses.  You have to satisfy both the red and blue APIs or the software doesn’t run.
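
To make the plug-and-socket picture a little more concrete, here’s a small Python sketch of my own.  The API names (including "ve-vnfm", a nod to the ETSI reference point) are just illustrative labels, not anything taken from the specifications; the point is simply that a component declares both blue and red dependencies, and deployment only works if the environment offers a socket for every one of them.

```python
from dataclasses import dataclass, field

@dataclass
class ComponentPlugs:
    name: str
    platform_apis: set = field(default_factory=set)   # "blue" plugs: ordinary OS/middleware APIs
    nfv_apis: set = field(default_factory=set)        # "red" plugs: NFV-specific interfaces

def unsatisfied_plugs(component: ComponentPlugs, sockets: set) -> set:
    """Return every declared plug the environment has no socket for."""
    return (component.platform_apis | component.nfv_apis) - sockets

# Hypothetical open-source firewall: needs POSIX sockets and syslog (blue)
# plus an NFV management reference point (red).
firewall = ComponentPlugs(
    name="open-firewall",
    platform_apis={"posix-sockets", "syslog"},
    nfv_apis={"ve-vnfm"},
)
available_sockets = {"posix-sockets", "syslog"}        # platform only, no NFV side
print(unsatisfied_plugs(firewall, available_sockets))  # {'ve-vnfm'}: it won't run as a VNF here
```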

A piece of network software of the sort that could be turned into a virtual function also has explicit external network connections to satisfy.  A typical software component might have several network ports: one for management access, one as an input port, and another as an output port.  Each of these ports has an associated protocol; for example, a management port might support SNMP or a web API (HTTP on port 80).  Data ports might have IP, Ethernet, or some other network interface (to connect to a tunnel, for example).
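
Here’s what that kind of port inventory might look like if it were written down rather than left buried in the software.  The field names and values are hypothetical, not drawn from any real VNF descriptor format.

```python
# Hypothetical port inventory for a function component.
vnf_ports = {
    "mgmt0": {"role": "management", "protocol": "snmp/udp-161"},
    "mgmt1": {"role": "management", "protocol": "http/tcp-80"},
    "in0":   {"role": "data-in",    "protocol": "ipv4"},
    "out0":  {"role": "data-out",   "protocol": "ethernet"},
}

for port, attrs in vnf_ports.items():
    print(f"{port}: {attrs['role']} over {attrs['protocol']}")
```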

Then there’s what we might call “implicit” plugs and sockets.  Virtual functions have a lifecycle process set, meaning that they have to be parameterized, activated, sustained in operation, perhaps scaled in or out—you get the picture.  This lifecycle process set may or may not be recognized by the software.  Scaling, for example, could be done using load balancing and control of software instances even if the software doesn’t know about it.  But something has to know, because the framework has to connect all the elements and work, even when there are many components with many plugs and sockets to deal with.
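
If you wrote that lifecycle process set down as an interface, it might look something like the sketch below.  This is my own illustration with assumed method names; the default behavior on the scaling calls is the important part, because it’s where something outside the software has to step in when the software itself doesn’t know about scaling.

```python
from abc import ABC, abstractmethod

class VnfLifecycle(ABC):
    """Illustrative lifecycle process set for a virtual function."""

    @abstractmethod
    def parameterize(self, config: dict) -> None: ...

    @abstractmethod
    def activate(self) -> None: ...

    @abstractmethod
    def heal(self) -> None: ...

    def scale_out(self) -> None:
        # Default: the function itself can't scale; an external load balancer
        # and instance controller have to provide this behavior.
        raise NotImplementedError("scaling is handled outside the software")

    def scale_in(self) -> None:
        raise NotImplementedError("scaling is handled outside the software")
```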

What this means is that when a piece of open-source software is viewed as a virtual function, it will have to be deployed in such a way that all the plugs from the software align with sockets in the platform it runs on, and all the sockets presented by NFV interfaces line up with some appropriate plug.  How that might happen depends on how the software was developed.

If we presume that somebody built an open-source component specifically for NFV, we could presume that the software itself would harmonize all the plugs and sockets for all the features.  The same thing could be true if the software was transplanted from a physical appliance and altered to work as a VNF.  Operators tell me that there is very little truly customized VNF software out there in any form, much less open-source.

The second possibility is to adopt what might be considered a variation on the “VNF-specific VNF Manager (VNFM).”  You start with a virtual function component that provides the feature logic, and you combine it with custom code that harmonizes the plugs, sockets, and connectivity the function naturally expects with what NFV requires.  This combination of functional component and management stub then forms the “VNF” that gets deployed.  Operators tell me that most of the VNFs they are offered use this approach, but also that only a very few open-source functions have been so modified.
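
The sketch below shows the shape of that stub approach, using invented class and method names.  The functional component only knows how to rewrite a configuration file and restart itself, and the stub translates NFV-style lifecycle calls into those native actions.

```python
class LegacyFirewallProcess:
    """Stands in for an unmodified open-source function component."""
    def write_config(self, text: str) -> None:
        print(f"config written: {text!r}")
    def restart(self) -> None:
        print("process restarted")

class VnfManagementStub:
    """Presents NFV-style lifecycle calls and delegates to the native interface."""
    def __init__(self, component: LegacyFirewallProcess):
        self.component = component
    def parameterize(self, params: dict) -> None:
        rendered = "\n".join(f"{k}={v}" for k, v in params.items())
        self.component.write_config(rendered)
    def activate(self) -> None:
        self.component.restart()

# The deployable "VNF" is the functional component plus the stub, packaged together.
vnf = VnfManagementStub(LegacyFirewallProcess())
vnf.parameterize({"rule": "deny tcp any any eq 23"})
vnf.activate()
```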

The final possibility is that you define a generic lifecycle management service that talks to whatever plugs are available from the function component, and makes the necessary connections inside NFV to do deployment and lifecycle management.  I’ve proposed this approach for both the original CloudNFV project and my ExperiaSphere model, but operators tell me that they don’t see any signs of adoption by vendors so far.
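
Here’s a rough illustration of what such a generic service might do, with invented driver names and descriptor fields.  Instead of packaging custom code with every VNF, one service inspects whatever management plug the function advertises and binds a matching driver.

```python
# Hypothetical registry of generic management drivers, keyed by the kind of
# management plug a function component says it has.
DRIVERS = {
    "rest":    lambda endpoint: f"REST driver bound to {endpoint}",
    "snmp":    lambda endpoint: f"SNMP driver bound to {endpoint}",
    "cli-ssh": lambda endpoint: f"SSH/CLI driver bound to {endpoint}",
}

def bind_lifecycle_driver(descriptor: dict) -> str:
    """Pick a generic driver from the function's declared management interface."""
    mgmt = descriptor["management"]
    if mgmt["type"] not in DRIVERS:
        raise ValueError(f"no generic driver for management type {mgmt['type']!r}")
    return DRIVERS[mgmt["type"]](mgmt["endpoint"])

print(bind_lifecycle_driver(
    {"management": {"type": "snmp", "endpoint": "udp://10.0.0.5:161"}}
))
```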

All of these options for open-source virtual functions raise two very specific issue sets: deployment (the NFV Orchestrator function) and lifecycle management (the VNFM).  In each area, current trials and tests have exposed one most significant challenge.

In deployment, the problem is that open-source software’s network connection expectations are quite diverse.  In some cases the software uses one or more Ethernet ports, and in others it expects to run on an IP subnet, sometimes shared with other components, and nearly always with the aid of services like DNS and DHCP.  One challenge this presents is that “forwarding graphs” showing the logical flow relationships among a set of VNFs may do little or nothing to describe how the actual network connectivity would have to be set up.
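
The sketch below illustrates that gap with an invented expansion rule.  The forwarding graph names two logical edges, but each edge actually implies a subnet, addressing, and DNS/DHCP services that the graph itself never mentions.

```python
# A logical forwarding graph: just the flow relationship between functions.
forwarding_graph = [("vFirewall", "vRouter"), ("vRouter", "vDPI")]

def expand_edge(src: str, dst: str, subnet: str) -> dict:
    """Invented expansion: the plumbing one logical edge actually implies."""
    return {
        "edge": f"{src} -> {dst}",
        "subnet": subnet,
        "services": ["dhcp", "dns"],   # most software simply assumes these exist
        "dns_records": [f"{src.lower()}.svc", f"{dst.lower()}.svc"],
    }

for i, (src, dst) in enumerate(forwarding_graph):
    print(expand_edge(src, dst, subnet=f"10.1.{i}.0/24"))
```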

In the lifecycle management case, there are two challenges.  One is to present some coherent management view of the VNF status.  In the ETSI model this is the responsibility of the VNFM, which is often integrated with the VNF, but I don’t think this is workable because the VNF may be instantiated in multiple places as a result of horizontal scaling.  The other challenge is giving the VNF information about its own resources.  You can’t have a tenant service element accessing real resource management data, particularly if it plans to then change variables to control behavior.
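
One way to picture a workable answer to both challenges is a management proxy, sketched below with invented names, that composes a single VNF status out of many scaled instances and hands the tenant only a read-only summary rather than access to real resource management data.

```python
class VnfManagementProxy:
    """Composes one VNF status from many instances; mediates tenant access."""

    def __init__(self):
        self.instances = {}                      # instance id -> raw status

    def report(self, instance_id: str, status: dict) -> None:
        self.instances[instance_id] = status

    def composite_state(self) -> str:
        # One coherent view: "degraded" if any scaled instance is down.
        if not self.instances:
            return "inactive"
        states = {s["state"] for s in self.instances.values()}
        return "active" if states == {"up"} else "degraded"

    def tenant_view(self) -> dict:
        # Read-only summary; no handles to the real resource management data.
        return {"state": self.composite_state(), "instances": len(self.instances)}

proxy = VnfManagementProxy()
proxy.report("vFW-1", {"state": "up", "host": "compute-7"})
proxy.report("vFW-2", {"state": "down", "host": "compute-9"})
print(proxy.tenant_view())    # {'state': 'degraded', 'instances': 2}
```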

I’ve said in prior blogs that VNF deployment should be viewed as platform-as-a-service (PaaS) cloud deployment, where the platform APIs come from a combination of operating system and middleware tools deployed underneath the VNFs, and connectivity and control management tools deployed alongside.  We have never defined this space properly, which means that there is no consistent way of porting software to become a VNF and no consistent way to onboard it for use.

What’s needed here is a simple plug-and-socket diagram that defines the specific way that VNFs talk to NFV elements, underlying resources, and management systems.  The diagram has to show all of the plugs and sockets, not only for the base configuration of the VNF but also for any horizontally scaled versions, including any load balancers needed.
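
Even short of a diagram, you could write that contract down.  The structure below is purely hypothetical, but it shows the kind of information such a framework would have to capture: every plug and socket of the base configuration, plus what changes when the VNF scales out.

```python
# Purely hypothetical: the plug-and-socket "contract" written down instead of drawn.
vnf_paas_contract = {
    "platform_apis": ["posix-sockets", "syslog", "dns", "dhcp"],   # blue plugs
    "nfv_apis":      ["ve-vnfm"],                                  # red plugs
    "ports": {
        "mgmt0": "http/tcp-80",
        "in0":   "ipv4",
        "out0":  "ipv4",
    },
    "lifecycle": ["parameterize", "activate", "heal"],
    "scaled": {
        "adds":      ["load balancer in front of in0"],
        "lifecycle": ["scale_out", "scale_in"],
    },
}

print(vnf_paas_contract["scaled"]["adds"])
```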

Open source is not the answer to this problem by itself; like any other software, it has to run inside some platform.  In fact, the lack of a defined platform puts the application of open-source software to VNFs at particular risk, because the resources needed to adapt the software become an open question, and in the open-source world the commercial interest in covering that risk is diminished.

Operator initiatives like the recent architecture announcements from AT&T and Verizon take a step in the right direction, but they’re not there yet.  I’d love to see these operators step up and define that VNFPaaS framework now, so we can start to think about the enormous opportunity that open-source VNFs could create for them all.