In my recent group of blogs I’ve covered the main issues with openness in NFV. What I’d like to do in closing this series is describe what could be an effective and open NFV model. There are probably a lot of ways to skin the NFV cat, but this one has been proven out in a couple of projects and so I’m pretty confident it would work. If you have other approaches in mind, I’d suggest you test them against the behavior of the model I’m suggesting, and to make that easy I’ll define some basic principles to build on.
The first principle is top-down design. My approach to NFV has always been top-down, because I’m a software architect by background and that’s how you’d normally develop software. The mandate is to start by defining the benefits you need, then defining an architecture that addresses those benefits effectively, and finally specifying an implementation that meets the requirements of the architecture.
NFV needs to cost less, make more, or both. Even a simple exploration of those goals makes it clear that the primary requirement is service automation, the second principle. NFV has to allow services to be built and run with zero touch (which is what the TMF “ZOOM” acronym means, by the way).
Service automation means turning a service description into a running service. In modern thinking, a service description would be called an intent model, because it defines a “black box” of components by what they’re intended to do, not by the contents of the box. NFV, then, should have two high-level pieces—a service modeling framework and a software component that turns the model into a running service. That’s the third principle.
If you look at the notion of intent models for services you see a natural hierarchy. There is a high-level model that could be called RetailService. Inside this model would be another set of models defining the functions that make up the RetailService. For example, a VPN service would have a single “VPNService” model and a set of “ServiceAccess” models, one for each access point. This means that a service is a nested set of intent models that describe all the functions and how they relate to each other.
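To make the nesting concrete, here is a minimal sketch of that hierarchy in Python. The model names (RetailService, VPNService, ServiceAccess) come from the discussion above; the fields and the `leaves()` walk are my own illustrative assumptions, not anything from a spec.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class IntentModel:
    name: str                                       # functional capability (VPN, access, ...)
    sla: Dict[str, float] = field(default_factory=dict)
    children: List["IntentModel"] = field(default_factory=list)

    def leaves(self) -> List["IntentModel"]:
        """Walk the hierarchy down to the models with no subordinates --
        the point where decomposition hands off to an infrastructure manager."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# A VPN service: one VPNService core plus a ServiceAccess model per access point.
service = IntentModel("RetailService", children=[
    IntentModel("VPNService"),
    IntentModel("ServiceAccess-SiteA"),
    IntentModel("ServiceAccess-SiteB"),
])
print([m.name for m in service.leaves()])
```

The point of the structure is that nothing above a model needs to know what’s inside it, only what it exposes.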
At some point, you decompose an intent model down to the level where what’s inside are specific resources, not other models. In NFV, this is the point where your intent model references a Virtual Infrastructure Manager (VIM), and that VIM (more generally, an IM or Infrastructure Manager since all the infrastructure isn’t virtualized) is responsible for deployment.
Each intent model is described by a functional capability (VPN, access, etc.), by port/trunk connections, and by a service-level agreement or operating parameter set. The set of all these parameters, these SLAs, defines the state of the service overall. My proposal has always been to consider this combined data as what I called a “MIB-Set”, and further that it be stored in a repository that isolates the real resource MIBs and allows quick formulation of arbitrary management views. That’s the fourth principle.
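As a hedged sketch of what such a repository could look like: raw resource/VNF MIBs sit behind it, and a management “view” is composed on demand from the model paths a caller is allowed to see. The path scheme and the view-building function are illustrative assumptions on my part.

```python
class MIBSetRepository:
    """Holds the MIB-Set: SLA/parameter data keyed by intent-model path,
    isolating the real resource MIBs from the things that read them."""

    def __init__(self):
        self._mibs = {}                          # model path -> parameter dict

    def update(self, path, params):
        self._mibs.setdefault(path, {}).update(params)

    def view(self, prefix):
        """Formulate an arbitrary management view: every entry whose model
        path starts with the prefix, and nothing else."""
        return {p: m for p, m in self._mibs.items() if p.startswith(prefix)}

repo = MIBSetRepository()
repo.update("RetailService/VPNService", {"availability": 0.999})
repo.update("RetailService/ServiceAccess-SiteA", {"availability": 0.995})
print(repo.view("RetailService/ServiceAccess"))
```

A management system with access only to the “ServiceAccess” prefix would see that slice of the MIB-Set and nothing more.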
Keeping that repository up-to-date is a system-wide function, starting with the polling of resource MIBs and VNF MIBs that provide raw status information. This status information is, you may recall, processed within the VIM to create a derived MIB for the intent model(s) the VIM represents. It follows, IMHO, that when you build an intent model of intent models, you would define the SLA/parameter variables for the new model by referencing variables in the subordinate ones. Each higher-level model has a derived management state. The derivations would have to be refreshed, either on demand when something changes below, or periodically.
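A tiny sketch of that derivation step follows. The variable names and the specific rules (minimum of child availabilities, sum of traffic) are illustrative assumptions; the text specifies only that a parent’s SLA variables reference its subordinates’ variables.

```python
def derive_state(child_mibs):
    """Combine subordinate MIB-Set entries into a parent's derived MIB."""
    return {
        # a service is only as available as its least-available subordinate
        "availability": min(m["availability"] for m in child_mibs),
        # total traffic is the sum across subordinate models
        "traffic_mbps": sum(m.get("traffic_mbps", 0.0) for m in child_mibs),
    }

children = [
    {"availability": 0.999, "traffic_mbps": 40.0},   # e.g. VPNService
    {"availability": 0.995, "traffic_mbps": 10.0},   # e.g. ServiceAccess
]
parent_mib = derive_state(children)
print(parent_mib)
```

The derivation would be re-run either when a subordinate’s variables change or on a timer, per the refresh options above.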
Now, we can see how management and operations could work. That starts by saying that each intent model is a finite-state machine, meaning that it has an operating state that is determined by the events that it recognizes. It maintains its own state, and it can generate (via its SLA/parameter changes) events upward or downward to its superior or subordinate models. This synchronizes all of the models in a lifecycle sense.
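A minimal finite-state-machine sketch for one intent model might look like this. The state names, events, and transition table are illustrative assumptions; the essential idea from the text is only that each model keeps its own state and signals its superior when that state changes.

```python
# (current state, event) -> new state; anything unlisted is ignored
TRANSITIONS = {
    ("ordered",    "activate"): "activating",
    ("activating", "up"):       "active",
    ("active",     "fault"):    "degraded",
    ("degraded",   "repaired"): "active",
}

class ModelFSM:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.state = name, parent, "ordered"

    def on_event(self, event):
        new_state = TRANSITIONS.get((self.state, event))
        if new_state and new_state != self.state:
            self.state = new_state
            if self.parent:                 # propagate a lifecycle event upward
                self.parent.on_event("fault" if new_state == "degraded"
                                     else "up" if new_state == "active"
                                     else event)

parent = ModelFSM("RetailService")
child = ModelFSM("ServiceAccess", parent=parent)
child.on_event("activate")
child.on_event("up")
child.on_event("fault")     # a fault below drives the superior into "degraded" too
print(child.state, parent.state)
```

Because every model runs the same kind of machine, the whole hierarchy stays synchronized through nothing but events.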
Anything that has to “see” management data will see the MIB-Set it’s a part of, and nothing else. In fact, it’s my view that everything has to see only the SLA/parameter variables of its subordinate models. The job of each model is to sustain its own state based on the state of the stuff below.
In this approach, a management system can read the MIB-Set and see what’s happening for the service at any level where it has the privilege to access. Furthermore, any NFV process, NMS process, or OSS/BSS process can be associated with a given event in any recognized operating state of any intent model, and it would be run when that combination occurred.
With this approach there is no such thing as a “VNFM” as a specific software element. We have a set of state/event tables and processes associated with the linkage. The tables determine what gets run, and the processes are generic: they’re defined not by why they’re being run (as a VNFM would be) but by what they do. If you scale out a VNF, for example, it doesn’t matter why you’re doing it, because that was covered in the state/event table.
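The “no VNFM” idea can be sketched as a table keyed by (state, event) that dispatches to generic processes. The process names and table entries here are illustrative assumptions; the point is that the same scale-out process runs regardless of what triggered it.

```python
# Generic processes: each knows what to do, not why it was invoked.
def scale_out(ctx):  return f"scaled out {ctx}"
def redeploy(ctx):   return f"redeployed {ctx}"
def notify_ops(ctx): return f"notified ops about {ctx}"

# The state/event table supplies the "why": it binds a context to a process.
STATE_EVENT_TABLE = {
    ("active",   "load_high"):  scale_out,
    ("active",   "vnf_failed"): redeploy,
    ("degraded", "vnf_failed"): notify_ops,
}

def dispatch(state, event, ctx):
    process = STATE_EVENT_TABLE.get((state, event))
    return process(ctx) if process else None    # unrecognized combinations are ignored

print(dispatch("active", "load_high", "firewall-VNF"))
```

Swapping a process or adding a new event means editing the table, not rewriting a monolithic manager.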
Service automation in this model is simply a matter of defining the “things to be done” as a set of “microservices” (to use the modern term) and then associating them with a context through state/event table definitions. Because what happens in response to events is the essence of operations management, we’re automating operations at any level.
This circles back to the top, the benefits. It proves that while the functional notion of how NFV has to work can be extracted from the ISG’s work, it’s unwise to extract the implementation in an explicit way. Software designers didn’t write the documents, but software is what has to emerge from an NFV implementation process. It also proves that you can make NFV work, and make it realize all the benefits hoped for from it.
As we move into 2016 we enter a critical period for NFV. Without a high-level framework like the one I’ve described here, we can’t unify NFV trials and deployments into a new model of infrastructure. We can’t prevent silos and loss of efficiency, or lock-ins. We can’t really even test pieces of NFV because we have no useful standard to test against. What difference does it make if a piston works great if it won’t fit in the car?
It would have been better for everyone had we developed NFV from the top down and addressed benefits realization the way it should be addressed in a software project. That didn’t happen, but while the NFV ISG has wandered too far over into defining implementation and away from its mission of providing a functional spec, we can still extract the outlines of that functional spec from the work. We can then apply top-down principles to build the implementation. Some vendors (the six I’ve named in prior blogs) have done that, and I hope that more will follow in 2016. I also hope that the focus of NFV shifts to the business case, because while I believe firmly that we can make a business case for NFV, I know that we’ve not done that yet.