Diving Deeper into Functional Versus Structural Modeling for NFV

A couple of you dropped me an email after my last blog and expressed an interest in hearing more about what I’d described as “functional” versus “structural” orchestration in NFV and even related technologies (SDN and the cloud).  OK, here goes.

If we start with the service example I gave in yesterday’s blog, we can see that a simple VPN service might consist of two logical or functional elements—the central VPN and the access technology.  If somebody ordered a VPN, we’d give them one central VPN functional element and as many access elements as they had endpoints—likely branch offices.

An operator who offered a service like this might well want to start by defining these same functional elements.  They could define the properties of each, the range of performance (the SLA parameters), etc.  This sort of thing is technology neutral, in that the functional elements are “black boxes” that could be created by any means that allowed the required properties of the element to be met.  SDN, NFV, legacy, it doesn’t matter.
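As a rough sketch of what such a technology-neutral definition might look like (the class and field names here are my own invention, not drawn from any standard), a functional element is just a black box with service-visible properties and an SLA range:

```python
from dataclasses import dataclass, field

@dataclass
class FunctionalElement:
    """A technology-neutral 'black box': what the element does and the
    SLA range it must meet, with no reference to how it's implemented."""
    name: str                                       # e.g. "VPN" or "broadband-access"
    properties: dict = field(default_factory=dict)  # service-visible properties
    sla: dict = field(default_factory=dict)         # e.g. availability, latency ranges

# A simple VPN service: one central VPN element plus one access element
# per endpoint (branch office).
vpn = FunctionalElement("VPN", sla={"availability": 0.999})
endpoints = ["branch-1", "branch-2", "branch-3"]
access = [FunctionalElement("broadband-access", properties={"site": e})
          for e in endpoints]
print(len(access))  # one access element per endpoint
```

Nothing in the definition says SDN, NFV, or legacy; that is the point of keeping the element a black box.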

OK, now suppose the order comes in.  The “functional model” of the service (a central circle called “VPN” and ancillary circles called “broadband access”) is first fleshed out with the specifics: where does the service access have to be presented, what’s the SLA, and so forth.  Once the model has become an order, we have to instantiate it—commit resources—to make the service real.

This is the point where we have to dip out of functional and into structural.  Suppose that the central VPN element is going to be created with a standard IP VPN using standard routers.  That means that the abstract VPN object has to decompose into the manipulation of network behavior via a management interface.  If that VPN element is to be created using OpenFlow and white-box switches, we have to drive a Controller to build the required paths.  If we were using software-hosted routing, we’d first have to deploy the software elements where we want them, then parameterize them as needed.  The point is that each of these tasks requires an organized set of steps and processes, and so they can be defined as a model.  That’s what I’m calling “structural” modeling and orchestration.
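A minimal sketch of what I mean here, with invented step and function names: each structural model is an organized, ordered set of steps that realizes the same abstract function with a different technology.

```python
# Hypothetical structural models: each is an ordered list of steps that
# realizes the abstract "VPN" function with one specific technology.
STRUCTURAL_MODELS = {
    "legacy-router": [
        "configure-vpn-via-management-interface",
    ],
    "openflow-sdn": [
        "compute-paths",
        "push-flow-rules-to-controller",
    ],
    "hosted-router": [
        "deploy-software-routers",
        "connect-instances",
        "parameterize-routing",
    ],
}

def instantiate(function: str, technology: str) -> list:
    """Decompose a functional element into the steps for one technology."""
    steps = STRUCTURAL_MODELS[technology]
    return [f"{function}:{step}" for step in steps]

print(instantiate("VPN", "hosted-router"))
```

The functional element stays the same; only the step list it decomposes into changes with the technology chosen.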

But we can dip even deeper here to learn something useful.  It’s very likely that the reasons we’d have multiple options for instantiating an element relate to where the service is being offered.  Perhaps VPNs confined to a single piece of an operator’s service area can be built entirely with SDN, but those that are wider in scope will either have to use classical routing or a mixture of the two technologies.  So our functional element, VPN, has to be decomposed differently depending on the location of the service users, the endpoints.  Each endpoint also has to be supported using whatever technology happens to be located there.  I think all of this is self-evident.

What may not be self-evident is that this means we can’t author a service that combines function and structure in the same model unless we want to build a model that describes every possible twist and turn of network functionality, vendors, etc.  If we allow the “how” of deployment to get intermingled with the functional “what,” we write infrastructure dependencies into our service models, meaning that we’d then have to change the models whenever infrastructure changed.

On the other hand, if we separated functional and structural models, the same “service” at the order level could be instantiated on whatever happened to be there as long as the transition from function to structure was made correctly.  You could add new options for any function simply by creating structural models for them and policies/rules on when to use those new models.
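One way to picture this extensibility (a sketch under my own assumptions; the function names and policies are hypothetical): structural options are registered against a function together with a policy predicate, so a new option is just a new registration that never touches the functional model.

```python
# Hypothetical registry: structural options for a function are recorded
# with a policy saying when to use them. Adding a new option is a new
# registration; the functional (service-level) model is untouched.
options = []

def register(function, model_name, policy):
    options.append({"function": function, "model": model_name, "policy": policy})

def select(function, context):
    """Pick the first registered structural model whose policy matches."""
    for opt in options:
        if opt["function"] == function and opt["policy"](context):
            return opt["model"]
    raise LookupError(f"no structural model for {function}")

# A metro-confined VPN can be pure SDN; anything wider falls back to IP.
register("VPN", "sdn-vpn", lambda ctx: ctx["single_metro"])
register("VPN", "ip-vpn", lambda ctx: True)  # fallback: classical routing

print(select("VPN", {"single_metro": True}))
print(select("VPN", {"single_metro": False}))
```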

In my ExperiaSphere material, I’ve noted that the functional/structural transition takes place at the point where the service goes from being a logical grouping of features to being a specific commitment of resources.  I’ve proposed that a “service architect” builds the functional model of the service by assembling “behaviors” that are created by a “resource architect”.  So when we want to provision a VPN in a given area, we might look first at our resource model and ask what geographic structural model best contained that service area.  We’d then go to the resource model selected, dip down to find the behavior (the function) we needed, and get the structural model for it.

The TMF actually has, or had, a corresponding structure.  Services decomposed into “Customer-Facing Services” and “Resource-Facing Services”.  The former would correspond to my functional modeling and the latter to my structural modeling.  That approach, at least in terminology, appears to be phasing out, which I think is unfortunate, but perhaps they could put it back.

The notion that I’ve called “binding” can now be explained.  The first “binding” that’s needed is that the service has to be bound to resource models that are selected by policy, probably largely based on geography and the topology of infrastructure overall.  This branch is in New York, so I first bind to the New York resource model.  I then seek a structural model for my access object (staying with the VPN example) within the New York model, because that’s where I record what can be done in New York.
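The two-step lookup might be sketched like this (the resource-model contents are invented for illustration): the endpoint’s geography selects the resource model, and the structural model for the function is then found inside it.

```python
# Hypothetical resource models, keyed by geography. Each records what
# can actually be done in that area, per function.
RESOURCE_MODELS = {
    "new-york": {"access": "ny-docsis-access", "VPN": "ny-mpls-vpn"},
    "chicago":  {"access": "chi-gpon-access", "VPN": "chi-sdn-vpn"},
}

def bind(function, endpoint_location):
    """First binding: geography -> resource model. Then look up the
    structural model for the function within that resource model."""
    resource_model = RESOURCE_MODELS[endpoint_location]
    return resource_model[function]

# This branch is in New York, so the access object binds there first.
print(bind("access", "new-york"))
```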

Once I know the structural model I need, I then decompose or orchestrate it to combine the elements needed to fulfill the functional mission I started with.  I need to record what I do at this point because first I have to commission the resources as needed (deploy, connect, parameterize) and second I have to manage them on an ongoing lifecycle basis, which is hard if I don’t know where my functionality is actually being generated.
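The record-keeping step might look like this minimal sketch (names are my own, not from any orchestration system): each commitment of resources is logged against the service and function, so lifecycle operations can later find where the functionality is actually being generated.

```python
# Hypothetical commitment record. Commissioning (deploy, connect,
# parameterize) would happen elsewhere; here we just record the outcome
# so ongoing lifecycle management knows where to look.
commitments = {}

def commit(service_id, function, resources):
    """Record which resources were committed to a function of a service."""
    commitments.setdefault(service_id, {})[function] = resources

def locate(service_id, function):
    """Lifecycle side: find where a function's resources actually are."""
    return commitments[service_id][function]

commit("vpn-001", "VPN", ["router-a", "router-b"])
print(locate("vpn-001", "VPN"))
```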

We have a number of current technologies that are well-suited to structural modeling, including OpenStack and YANG.  We have at least one (TOSCA) that I think could be used to describe both structural and functional models.  But whatever we do, it’s my view that it is absolutely crucial for the success of NFV that we keep the two models separate, and that we recognize that one model is based on what the service does, and the second on where and how it does it.