When cloud computing came along, it was clear that the process of deploying multi-component applications and connecting the pieces to each other and to users was complicated. Without some form of automation, in fact, deployment would likely have been so error-prone as to threaten the stability of the cloud and its business case. What developed out of this was an extension of an existing concept, DevOps, to do the heavy lifting. NFV needs the same thing, but likely more so. But how would NFV “DevOps” work?
There are two models of DevOps used in the cloud. One, the “declarative” model, defines the desired end-state and lets the software figure out how to get there. The other, the “imperative” model, defines the steps to be taken. There is a general consensus that NFV needs a declarative approach, but none has been formally adopted, and most people don’t realize that NFV really has two different “models” to contend with.
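To make the contrast concrete, here’s a minimal sketch. The function names and fake deployment steps are my own illustration, not drawn from any NFV specification or real DevOps tool:

```python
# A minimal sketch of the two DevOps styles; all names are illustrative.

# Imperative model: the operator (or tool) spells out the ordered steps.
def deploy_imperatively():
    steps = [
        "allocate VM",
        "load firewall image",
        "connect port to access network",
        "start firewall",
    ]
    for step in steps:
        print(f"executing: {step}")  # each step is explicit and ordered

# Declarative model: the operator states the end-state; the software
# compares it to what exists and works out the steps itself.
def deploy_declaratively(desired: dict, actual: dict):
    for element, state in desired.items():
        if actual.get(element) != state:
            # the tool, not the operator, decides how to converge
            print(f"converging {element!r} to {state!r}")
            actual[element] = state
    return actual

if __name__ == "__main__":
    deploy_imperatively()
    deploy_declaratively(
        desired={"firewall": "running", "access-port": "connected"},
        actual={"firewall": "stopped"},
    )
```

The imperative version breaks the moment an implementation changes; the declarative version cares only whether the actual state matches the desired one.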
You’ll remember from a prior blog that the Virtual Infrastructure Manager (VIM) is (IMHO) responsible for converting an intent model of a deployment (or a piece of one) into the steps needed to actually commit infrastructure. That means the VIM would likely employ an NFV-suitable DevOps model. However, if services are (as I’ve asserted they must be) composed of multiple pieces that are independently deployed, then there has to be another model that describes service composition, as opposed to component deployment.
This second model level is absolutely critical to NFV success, because it’s this level of model that ensures the infrastructure and even the VNF components of a service are truly open and interoperable. Unlike the lower-level model inside the VIM, the service-level model contains the logical elements of a service; where the lower-level model is resource-specific, the service-level model is function-specific. Finally, the service-level model is what the NFV MANO function should work on, launching the whole of the NFV process.
My view of the service-level process is based on a pair of operations people or teams—the service and resource architects. Service architects decide what features will be required for a service and lay out the way those features have to look, to customers and to each other. Resource architects decide how features are deployed and connected in detail. It’s easiest to visualize this in terms of service catalogs.
A retail service goes in the “finished goods” section of a catalog. What it looks like is simple: it’s a model of the functional makeup of the service, connected through a set of policies to a set of resource models (a minimum of one per service function) that can fulfill it. When you order a service (directly through a portal or via a customer service rep) you extract the associated service model and send it to a deployment process (which some say is inside the OSS/BSS, some inside MANO, and some external to both). That process uses the parameters of the service first to select implementation options, and then to drive the VIMs associated with the collective resources available to build each function.
This implies that a catalog also contains what could be called a “blueprint” section and a “parts for production” section. The former would be the service models, both as low-level functions and as collections of functions. The latter would be the resource models. Service architects would build a service by assembling the service-model components and binding them to resource models. The result would then go into the “finished goods” section, from which it could be ordered.
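Here’s a sketch, under my own naming, of how the three catalog sections and the ordering flow might relate. Nothing here comes from any NFV specification; the classes, the policy, and the sample entries are all hypothetical:

```python
# "parts" = resource models, "blueprints" = service functions,
# "finished goods" = orderable services. All names are illustrative.
from dataclasses import dataclass

@dataclass
class ResourceModel:
    """A 'parts for production' entry: one way to fulfill a function."""
    name: str
    vim: str          # the VIM that can realize this implementation
    cost: float

@dataclass
class ServiceFunction:
    """A 'blueprint' entry: a logical function plus its candidate parts."""
    name: str
    candidates: list[ResourceModel]

@dataclass
class RetailService:
    """A 'finished goods' entry: a composition of functions."""
    name: str
    functions: list[ServiceFunction]

def order(service: RetailService, policy=lambda rm: rm.cost):
    """Ordering extracts the model and binds each function to a part."""
    for fn in service.functions:
        choice = min(fn.candidates, key=policy)   # policy-driven selection
        print(f"{fn.name}: deploy {choice.name} via {choice.vim}")

if __name__ == "__main__":
    fw = ServiceFunction("firewall", [
        ResourceModel("vFW-on-OpenStack", "openstack-vim", cost=3.0),
        ResourceModel("FW-appliance", "legacy-vim", cost=5.0),
    ])
    order(RetailService("business-internet", [fw]))
```

The point of the structure is that the “finished goods” entry never names an implementation; the policy makes that choice at order time.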
The VIM-level DevOps process would clearly look like a simple cloud deployment, and so it would be possible to use either declarative or imperative modeling for it. I think both will likely end up being used, and that’s fine as long as we remember a rule from the prior blog: you cannot expose deployment details in the VIM interface to MANO, or you lose the free substitution of implementations of functions. Thus, you can have an intent model that’s fulfilled under the covers by a script, but the VIM itself has to present a declarative model as its interface.
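A small sketch of that rule, with interface and class names of my own invention (this is not the ETSI-defined VIM interface): MANO sees only a declarative intent model, and whether a given VIM fulfills it with a script stays invisible.

```python
# MANO sees only the declarative realize() interface; the imperative
# script inside one VIM implementation never leaks out. Illustrative only.
from abc import ABC, abstractmethod

class Vim(ABC):
    @abstractmethod
    def realize(self, intent: dict) -> None:
        """Accept a declarative description of what to deploy."""

class ScriptedVim(Vim):
    """Under the covers, this VIM runs an imperative script..."""
    def realize(self, intent: dict) -> None:
        for step in self._plan(intent):        # ...but the steps never
            print(f"  running: {step}")        # appear in the interface

    def _plan(self, intent: dict) -> list:
        return [f"create {name}" for name in intent["functions"]]

def mano_deploy(vim: Vim, intent: dict) -> None:
    # any Vim implementation can be swapped in; substitution stays free
    vim.realize(intent)

if __name__ == "__main__":
    mano_deploy(ScriptedVim(), {"functions": ["firewall", "router"]})
```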
That means, to me, that the service model has to be declarative too. Not only that, it has to be declarative in a functional sense, not in the sense of describing how the deployment happens below. It’s fine to say, in a service model, that “three access functions connect to a common VPN function”, but how any of those functions is implemented must be opaque at this level. If it’s not, you don’t have a general, open NFV service.
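To show what that opacity looks like, here’s the “three access functions plus a common VPN function” example rendered as a data structure. I’m using Python rather than a real modeling language purely for illustration; the structure and field names are mine:

```python
# A function-level service model: connections reference functions,
# never implementations or resources. Structure is illustrative only.
service_model = {
    "service": "managed-vpn",
    "functions": [
        {"name": "access-1", "type": "access"},
        {"name": "access-2", "type": "access"},
        {"name": "access-3", "type": "access"},
        {"name": "core", "type": "vpn"},
    ],
    "connections": [
        ("access-1", "core"),
        ("access-2", "core"),
        ("access-3", "core"),
    ],
}

# At deployment, each function type is handed to whatever VIM intent
# model fulfills it; the service model itself stays implementation-opaque.
for name_a, name_b in service_model["connections"]:
    print(f"{name_a} <-> {name_b}")
```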
That may have an impact on the modeling language selected. My own work on NFV was based on models defined using XML, but I also think TOSCA would be an excellent modeling option. I’m far less enthusiastic about YANG, because I think that modeling approach is better suited to declaring deployment structures, meaning it would serve as the declarative model within the VIM. It could probably be made to work at the service level, provided the mission of service modeling wasn’t compromised.
The boundary between service and resource modeling isn’t rigid. The bottom of resource modeling is the set of intent models representing the VIMs, but it’s possible that modeling “above” that could be used to set resource-related policies and select VIMs, keeping those kinds of decisions out of the service models. In my ExperiaSphere project, I proposed separate “service” and “resource” domains with a common modeling approach in both. I still think that’s the best approach, but it’s also possible to extend VIM modeling upward to the service/resource boundary. I think the benefits of extending VIM modeling upward like that are limited, though, because you’d have to support all the possible VIMs representing all the possible flavors of infrastructure, and you couldn’t predict what modeling would be used in the collection of VIMs you needed.
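A sketch of the common-modeling idea, again with names of my own choosing rather than anything from ExperiaSphere’s actual specifications: the same model-element type serves both domains, and only a leaf resource element ever touches a VIM.

```python
# One node type serves both domains; only its contents differ, and the
# service/resource boundary is just where the domains meet. Illustrative.
from dataclasses import dataclass, field

@dataclass
class ModelElement:
    name: str
    domain: str                       # "service" or "resource"
    children: list = field(default_factory=list)

    def decompose(self):
        if not self.children:         # a leaf resource element becomes an
            print(f"[{self.domain}] realize {self.name} via a VIM")
            return                    # intent model handed to a VIM
        for child in self.children:
            child.decompose()

if __name__ == "__main__":
    vpn = ModelElement("vpn-core", "resource")
    service = ModelElement("managed-vpn", "service", children=[vpn])
    service.decompose()
```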
I think the service/resource domain concept is a natural one for both technical and operational reasons, and it might also form the logical boundary between OSS/BSS processes on one side and resource/network management and “NFV” processes on the other. If I were an OSS/BSS player or a limited-scope provider of NFV solutions (an NFVI player, for example), I’d focus my attention on this boundary and on standardizing the intent model there. If I were a full-service player, I’d tout my ability to model in a common way on both sides of the boundary as a differentiator. You could then integrate virtually any real NFV implementation or silo with operations systems of any flavor.
Service/resource modeling is also likely essential for effective management of NFV-based services, and in particular of services with only a few NFV elements. The “service” layer can represent the functional things that users expect to see and manage, and the service/resource connection is where the craft view of management intersects with the customer/CSR view.
Declarative modeling is the right approach for NFV, I think. Not only does it map nicely to the intent-model concept that’s critical in preserving the abstractions that form the basis for virtualization, it’s also naturally implementation-neutral. It is very difficult to write a script (an imperative DevOps tool) that isn’t dependent on the implementation. A good declarative modeling strategy at the service and resource level is where NFV should start, and where it’s going to have to end up if it’s ever to be fully realized.
That’s because NFV’s benefits in operations efficiency and service agility are critical, and those benefits depend on management. Management is a whole new can of worms, of course, but it can be fixed if we presume a proper modeling of services and resources. That will be my next topic in this series.