The Latest in the New-Service Modeling Game

Modeling is critical for both SDN and NFV, and I’ve blogged on the topic a number of times.  In particular, I’ve been focusing on “intent models” as a general topic and on the OASIS TOSCA (Topology and Orchestration Specification for Cloud Applications) in particular.  The two, as we’ll see, have a relationship.

A recent set of comments on LinkedIn opened a discussion of models that I think is important enough to share here.  Chris Lauwers of Ubicity offered his view of the modeling world: “TOSCA models are ‘declarative’ in that they describe WHAT it is you’re trying to provision (as opposed to ‘prescriptive’ models that describe HOW you’re going to get there). ONF uses the term ‘intent’ to describe the same concept, and yet others refer to these types of models as ‘desired-state’ models. While there are subtle nuances between these terms, they all effectively describe the same idea. As far as I know, TOSCA is currently the only standard for declarative models, which is why it is gaining so much traction. One of the main benefits of TOSCA is that it provides abstraction, which reduces complexity tremendously for the service designer. TOSCA also uses the same declarative approach to model services/applications as well as infrastructure, which eliminates the types of artificial boundaries one often sees in various industry ‘reference models’.”

This is great stuff, great insight.  The distinction between prescriptive and declarative modeling goes back to the first DevOps tools, and today’s most popular DevOps tools (Chef, which is prescriptive, versus Puppet, which is declarative) still reflect the division.  When you apply the terms to SDN and NFV, there are factors other than DevOps roots that come into play, and these explain the industry shift (IMHO) toward intent modeling.  They also (again, IMHO) illustrate the balance that network/service modeling has to strike.

A prescriptive model, in the context of SDN or NFV, would describe how you do something, like provisioning a service.  It’s easy to create one of these because the prescription simply mirrors the manual processes or steps that would otherwise be needed.  Any time you create a new service or element, you can create a prescriptive model by replicating those manual steps.

Declarative network/service models, in contrast, reflect the functional goal—Chris’s “What?” element.  I’m trying to build a VPN—that’s a declaration.  Because the intent is what’s modeled, this is now often called “intent modeling”.  With intent modeling you can create a model without even knowing how you build it—the model represents a true abstraction.  But to use the model, you have to create a recipe for the instantiation.
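
To make the contrast concrete, here’s a rough Python sketch of my own (purely illustrative, not drawn from TOSCA or any product): the prescriptive version hard-codes the steps, while the declarative version just states the desired “VPN” and leaves the “how” to whatever resolver holds the recipe.

```python
# Hypothetical illustration of prescriptive vs. declarative service models.

# Prescriptive: the model IS the procedure -- an ordered list of steps.
def provision_vpn_prescriptively():
    steps = [
        "allocate edge routers",
        "configure BGP/MPLS instances",
        "attach customer access ports",
        "verify reachability",
    ]
    for step in steps:
        print(f"executing: {step}")   # stand-in for real provisioning calls

# Declarative: the model states intent (the WHAT); a separate resolver
# owns the recipe for turning that intent into resource commitments.
vpn_intent = {
    "type": "VPN",                    # functional goal, not a procedure
    "sites": ["NYC", "LON", "TOK"],
    "bandwidth_mbps": 100,
    "sla": "gold",
}

def resolve(intent: dict) -> None:
    # A real resolver would select a recipe based on intent["type"];
    # here we only show that the caller never sees the "how".
    print(f"resolving intent: {intent['type']} across {len(intent['sites'])} sites")

provision_vpn_prescriptively()
resolve(vpn_intent)
```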

Prescriptive equals recipe; declarative equals picture-of-the-dish.  The value proposition here follows from that distinction.  If you were planning a banquet you might like to know what you’re serving and how to arrange the courses before you go to the trouble of working out how to make each dish.  You might also like to be able to visualize two or three versions of a dish, perhaps one that’s gluten-free and another lactose-free, yet set up the menu and presentation once and change only the execution to reflect a diner’s preference.

This is why I think intent modeling, not prescriptive modeling, is the right approach to SDN and NFV, and in fact to service modeling overall.  You should be able to manipulate functional abstractions for as long as possible, and resort to the details only when you commit or manage resources.  You should also be able to represent any element of a service (including access) as well as what I’ll call “resource behaviors” like VPN or VPLS, then combine them in a simple assembly of functions and features to create something to sell.

What has tripped up a lot of vendors/developers on the intent-declarative approach is the notion that the dissection of intent into deployment has to be authored into the software.  A widget object, in other words, is decomposed because the software knows what a widget is.  That’s clearly impractical because it forecloses future service creation without software development.  But declarative models don’t have to be decomposed this way; you can do the decomposition in terms of the model itself.  The recipe for instantiating a given model, then, is included in the model itself.
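
Here’s a rough sketch of what “the recipe lives in the model” might look like, again my own invention for illustration: the software below knows nothing about “widgets”; it just walks whatever decomposition the model node carries.

```python
# Hypothetical sketch: decomposition driven by the model, not by the code.
# The generic engine has no built-in knowledge of "Widget" or "VPN";
# each node carries either its own child nodes or a deployment reference.

widget_model = {
    "name": "Widget",
    "decomposition": [
        {"name": "WidgetCore", "deploy": "recipe://widget-core"},
        {"name": "WidgetAccess", "deploy": "recipe://widget-access"},
    ],
}

def decompose(node: dict, depth: int = 0) -> None:
    indent = "  " * depth
    children = node.get("decomposition", [])
    if children:
        print(f"{indent}{node['name']}: decomposing per the model")
        for child in children:
            decompose(child, depth + 1)
    else:
        # Leaf node: the model names the recipe; the engine just invokes it.
        print(f"{indent}{node['name']}: invoke {node['deploy']}")

decompose(widget_model)
```

Adding a new service here means authoring new model data with new recipe references, not writing new decomposition logic.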

This can be carried even further.  A given intent model represents a piece of a service, right?  It can be deployed, scaled, decommissioned, and managed as a unit, right?  OK, then, it has lifecycle processes of its own.  Those processes can be linked to service events within the intent model, so you have a recipe for deployment, sure, but also a recipe for everything else in the lifecycle.  If we then presume that objects adjacent to a given object can receive and generate service events to/from it, we can now synchronize lifecycle processes through the whole structure, across all the intent-modeled elements.
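
One way to picture that coupling (a sketch of my own, assuming a simple state/event table per intent object rather than any particular standard’s format) is below: each object maps a state and an event to a lifecycle process, and forwards the event to its neighbors so the whole structure stays in step.

```python
# Hypothetical sketch of event-coupled lifecycles: each intent object maps
# (state, event) pairs to a lifecycle process, and can forward events to
# adjacent objects so the whole structure stays synchronized.

class IntentObject:
    def __init__(self, name, lifecycle_table):
        self.name = name
        self.state = "ordered"
        self.table = lifecycle_table      # {(state, event): (process, next_state)}
        self.neighbors = []               # adjacent intent objects (one-way here to keep it simple)

    def handle(self, event):
        key = (self.state, event)
        if key not in self.table:
            return                        # event not meaningful in this state
        process, next_state = self.table[key]
        print(f"{self.name}: {self.state} --{event}--> run '{process}', now {next_state}")
        self.state = next_state
        for n in self.neighbors:          # propagate the service event
            n.handle(event)

table = {
    ("ordered", "deploy"): ("deployment-recipe", "active"),
    ("active", "fault"): ("remediation-recipe", "degraded"),
    ("active", "decommission"): ("teardown-recipe", "retired"),
}

vpn = IntentObject("VPN", table)
access = IntentObject("EthernetAccess", table)
vpn.neighbors.append(access)

vpn.handle("deploy")    # both objects run their deployment recipes
vpn.handle("fault")     # both run remediation
```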

You can even express the framework in which VNFs run as an intent model.  Most network software thinks it’s running in either an IP subnet or simply on a LAN.  The portals to the outside world are no different from VPN service points in terms of specifications.  The management interfaces on the VNFs can be connected to management ports in this model, so you’re essentially composing the configuration and connection of elements as part of the model, which means it happens automatically.
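
Here’s how that hosting framework might be expressed as just another intent model, with the VNFs’ management interfaces appearing as ordinary connection points (an illustrative structure I’ve invented for this post, not drawn from the ETSI or ONF specifications):

```python
# Hypothetical sketch: the environment the VNFs run in is itself a model.
# To the hosted software it looks like a subnet/LAN; management interfaces
# are simply bound to management ports declared in the same model.

vnf_hosting_model = {
    "type": "IPSubnet",                  # what the hosted software thinks it's on
    "connection_points": [
        {"name": "wan-side", "role": "service", "binds": "VPN.edge"},
        {"name": "lan-side", "role": "service", "binds": "CustomerLAN"},
        {"name": "fw-mgmt", "role": "management", "binds": "Firewall.mgmt-if"},
    ],
}

for cp in vnf_hosting_model["connection_points"]:
    # Composing the model composes the connections; nothing is wired by hand.
    print(f"connect {cp['binds']} to {cp['name']} ({cp['role']} port)")
```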

Everything related to an element of a service, meaning to any intent-modeled element in a service model (“VPN”, “EthernetAccess”, and so on), can be represented in the model itself as a lifecycle process.  You can do ordering, pass parameters, instantiate things, connect things, spawn sub-models, federate services to other implementations across administrative boundaries, whatever you like.  That includes references to outside processes like OSS/BSS/NMS because, in a data-model recipe, everything is outside.  The model completely defines how it’s supposed to be processed, which means that you can extend the modeling simply by building a new model with new references.

This is what makes intent-model integration so easy.  Instead of having to write complex management elements specialized to a virtual network function, and then integrate them with the VNFs when you deploy, you simply define a handler designed to connect to the VNF’s own management interfaces and reference that handler in the model.  If you have a dozen different “firewalls”, you can have one intent-model object to represent the high-level classification, and it decomposes into the specific implementation needed.
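
As an illustration of that last point (with the vendor names, recipe references, and handler URIs invented purely for the example), a single “Firewall” intent object might carry a catalog of implementations, each pairing a deployment recipe with the handler that talks to that VNF’s own management interface:

```python
# Hypothetical sketch: one high-level "Firewall" object, many implementations.
# Each implementation names its own deployment recipe and the handler that
# connects to that VNF's native management interface; the model picks one.

firewall_catalog = {
    "vendor-a": {"deploy": "recipe://fw-vendor-a", "mgmt_handler": "handler://vendor-a-cli"},
    "vendor-b": {"deploy": "recipe://fw-vendor-b", "mgmt_handler": "handler://vendor-b-rest"},
}

def instantiate_firewall(selection: str) -> dict:
    # Selection could key off price, capacity, or operator policy;
    # the service designer only ever references "Firewall".
    choice = firewall_catalog[selection]
    print(f"deploy via {choice['deploy']}, manage via {choice['mgmt_handler']}")
    return choice

instantiate_firewall("vendor-b")
```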

Another interesting thing about intent modeling is that it makes it easier to mix implementations.  Intent models discourage tight coupling between lower and higher levels in a structure, and that in turn means that a model in one implementation can be adapted for use in another.  You could in theory cross silo boundaries more easily, and this could be important for both SDN and NFV because neither standard has developed a full operations vision, which means their benefit cases are likely to be more specialized.  That promotes service-specific deployment and silos.

Even among the vendors who can make a broad operations business case that couples downward to the infrastructure, talking about intent models is rare and detailed explanations of strategy are largely absent.  I hope we see more modeling talk, because we’re getting to the point where the model is the message that matters.