Resource Modeling and Open Infrastructure in SDN/NFV

I said in my last blog that starting from the top in viewing next-gen services and infrastructure led to a refinement of operator-sponsored models for SDN/NFV deployment.  In that blog, I took the process down to what I said was the critical point: the “Network Function” that replaces the TMF concept of a “Resource-Facing Service” and forms the boundary between OSS/BSS and service management on one side and the down-in-the-dirt management of infrastructure on the other.  I said then I’d dig down below the NF level in a later blog, and this is it.

Recapping the critical lead-in to this discussion: an NF in my approach is an intent model that describes a function that can be composed into a service.  The NF has to be put into a commercial wrapping, which could be done most easily by assigning it to a Customer-Facing Service (CFS) and giving it commercial attributes.  But you still have to realize the abstraction of an NF somehow, and that’s today’s discussion.
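To make that concrete, here’s a minimal sketch (my own illustration in Python, not any standard’s API) of what an NF-as-intent-model and its commercial CFS wrapper might look like; the class and field names are hypothetical.

```python
# A minimal sketch of an NF as an intent model: it exposes what the function
# does and the terms it promises, while keeping its realization opaque.
# Names and fields here are illustrative assumptions, not a standard.
from dataclasses import dataclass, field

@dataclass
class NetworkFunction:
    """An intent model: externally visible function, opaque realization."""
    name: str                                        # e.g. "VPN"
    properties: dict = field(default_factory=dict)   # offered behavior (endpoints, SLA targets)
    realization: object = None                       # filled in at decomposition time, invisible above

@dataclass
class CustomerFacingService:
    """Commercial wrapper: an NF plus the attributes that make it sellable."""
    nf: NetworkFunction
    price: float
    billing_period: str = "monthly"

vpn_nf = NetworkFunction("VPN", {"endpoints": ["NYC", "London"], "sla": "99.9%"})
vpn_cfs = CustomerFacingService(vpn_nf, price=1200.00)
```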

It would be lovely if we could simply jump from the NF directly into some convenient and simple API, the old saw of the “DoJob” function.  In some cases that might even work, provided that the scope of the service and the capabilities of the management layers of infrastructure combined so that one API could cover all the features needed everywhere.  In today’s world, that’s unlikely.

An NF would logically have to decompose first into what I’ll call domains, which represent collections of infrastructure that obey a common management system.  If I want to sell a VPN or VLAN and add virtual-CPE features, I’d need to have a subdivision representing the various management frameworks through which I needed to operate, likely based on the location of the endpoints.  Thus, a VPN NF would decompose into a series of “administrative NFs” that represented the various collections of endpoints that each could serve.  I’d pick several of these during the primary NF decomposition, based on just where I needed to project service access.

Inside my primary administrative domains, I’d probably have further subdivisions that represented any different technical routes to the same functional capability.  Here I might command routers, and there I might use SDN, and over on the far right (or left, depending on your preferred political leaning!) I might have to deploy virtual elements using NFV.  If I follow this kind of “domain dissection” process downward, each branch would eventually lead me to something that actually controls or deploys resources.
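Here’s a hedged sketch of that “domain dissection” process, again in illustrative Python; the domain names, endpoint sets, and the deliberately trivial selection policy are all invented for the example.

```python
# Sketch of NF decomposition: first into administrative domains based on
# which endpoints each can serve, then into a technical route (router
# commands, SDN, or NFV deployment) within each domain.  All data is invented.

ADMIN_DOMAINS = {
    "us-east": {"endpoints": {"NYC", "Boston"},   "technologies": ["legacy-router"]},
    "eu-west": {"endpoints": {"London", "Paris"}, "technologies": ["sdn", "nfv"]},
}

def decompose_vpn(endpoints):
    """Pick the administrative domains needed to reach the service endpoints,
    then pick a technical route within each."""
    plan = []
    for name, domain in ADMIN_DOMAINS.items():
        covered = domain["endpoints"] & set(endpoints)
        if covered:
            plan.append({
                "admin_domain": name,
                "endpoints": sorted(covered),
                "technology": domain["technologies"][0],  # simplest possible selection policy
            })
    return plan

print(decompose_vpn(["NYC", "London"]))
```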

I’ve argued in the past that it could often be logical to assume that a given operator would build up from the resources to expose a set of cooperative functional behaviors.  These are what are available for combination into NFs, either as alternative elements or cooperative ones.  An operator could use this bottom-up exposure mapping to decide just what network or hosted features they wanted service architects to be able to use.  I’ve described this bottom-up behavioral exposure as being the function of a “resource architect”.
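A rough sketch of what that resource-architect exposure might look like is a behavior catalog; the behavior names and realizer labels below are hypothetical, and the point is only that the exposed set is what service architects get to compose.

```python
# Sketch of bottom-up behavioral exposure: the resource architect publishes
# behaviors backed by real resources, and only those behaviors are available
# for composition into NFs.  Names are illustrative, not from any standard.

class BehaviorCatalog:
    def __init__(self):
        self._behaviors = {}

    def expose(self, name, realizer):
        """Resource architect publishes a behavior backed by some realizer."""
        self._behaviors.setdefault(name, []).append(realizer)

    def available(self):
        """What service architects are allowed to compose into NFs."""
        return sorted(self._behaviors)

    def realizers(self, name):
        """Alternative or cooperative implementations of one behavior."""
        return self._behaviors.get(name, [])

catalog = BehaviorCatalog()
catalog.expose("ip-vpn", "mpls-domain-controller")
catalog.expose("ip-vpn", "sdn-overlay-controller")
catalog.expose("vcpe-firewall", "nfv-deployer")
print(catalog.available())           # ['ip-vpn', 'vcpe-firewall']
print(catalog.realizers("ip-vpn"))   # alternative routes to the same behavior
```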

This is similar to an aspect of cloud deployment.  In modern cloud tools (DevOps) we have applications at the top, which deploy onto virtual resources in the middle.  These in turn are deployed onto real resources at the bottom.  The intermediary abstraction, a VM or container, means that the top processes and the bottom ones know of each other only insofar as both support a common binding abstraction.

From a modeling perspective, I could visualize the central or binding NF concept as being the border between resource-independent and resource-specific behavior.  At the top, I assemble functions.  At the bottom I coerce cooperative behavior from resource pools.  The top focuses on what features are and do, and the bottom on how they’re actually created.  This difference could be reflected in a number of ways, but one is to shift from a functional or abstract view of a service model to a topological view, one that reflects the location and nature of the real stuff.  You could see this in terms of a modeling language shift, from something like TOSCA to something like YANG.
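To show what that shift means in practice, here are two toy views of the same VPN NF, written as plain data rather than real TOSCA or YANG syntax; every field and node name is my own illustration.

```python
# Two views of the same VPN NF.  The "functional" view says what the service
# is and does; the "topological" view says where and with what it gets built.
# Plain Python data standing in for TOSCA/YANG, not actual modeling syntax.

functional_view = {            # resource-independent: what the feature is/does
    "function": "ip-vpn",
    "endpoints": ["NYC", "London"],
    "sla": {"availability": "99.9%"},
}

topological_view = {           # resource-specific: where/how it is created
    "nodes": ["pe-router-nyc-1", "sdn-fabric-ldn-3"],
    "links": [("pe-router-nyc-1", "sdn-fabric-ldn-3")],
    "per_node_config": {
        "pe-router-nyc-1":  {"method": "cli/netconf",    "vrf": "cust-42"},
        "sdn-fabric-ldn-3": {"method": "sdn-controller", "slice": "vpn-42"},
    },
}
```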

Or not, and let me illustrate why.  The modeling that’s provided at a given layer depends on the role the model has to play.  I could decompose my binding NF into a YANG model of domain connectivity, and that would be essential if I had to build a multi-domain resource commitment by commanding stuff per-domain and then linking what I’ve built.  But if I had a supermanager that saw all the domains and somehow mysteriously accommodated different vendors and technologies inside them all, I could then simply tell that supermanager to build the function.  In the first case, I decompose into YANG, and in the second I simply invoke a supermanager API.
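Sketched as code, the two cases might look like this; the per-domain functions and the supermanager API below are stand-ins I’ve invented to show the shape of each path, not anything defined by a standard.

```python
# Two realization paths for the same binding NF.  Case 1 decomposes into an
# explicit connectivity model and commands each domain; case 2 hands the whole
# intent to one manager.  All functions and the API are hypothetical stubs.

def commit_domain(domain):                 # stand-in for per-domain management
    return f"committed:{domain}"

def stitch(a, b):                          # stand-in for inter-domain linkage
    return f"link:{a}<->{b}"

class SuperManager:                        # stand-in for a do-it-all manager
    def build(self, intent):
        return f"built:{intent['function']} across {intent['domains']}"

def realize_via_topology(domains, links):
    """Case 1: command each domain, then link what was built."""
    pieces = [commit_domain(d) for d in domains]
    joints = [stitch(a, b) for a, b in links]
    return pieces + joints

def realize_via_supermanager(intent, manager=None):
    """Case 2: invoke one API and let the supermanager hide the topology."""
    manager = manager or SuperManager()
    return manager.build(intent)

print(realize_via_topology(["us-east", "eu-west"], [("us-east", "eu-west")]))
print(realize_via_supermanager({"function": "ip-vpn", "domains": ["us-east", "eu-west"]}))
```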

Likely these supermanagers, and in fact many of the lower-level management systems that are eventually the target of a hierarchical decomposition, will have to know about topology.  If that’s the case, though, the need is fulfilled inside the intent model, and it’s all invisible to the outside world.

This difference highlights a key question on openness.  At the NF level, everything that realizes the function is equivalent, so the NF is open.  However, if I have a huge, complex god-box below, whose capabilities include support only for my own products, I’ve created at least a good start at de facto lock-in for buyers.  There is “titular openness” but perhaps not openness at a true and practical level.

What this suggests to me is that for an open implementation of NFV and SDN to yield genuinely open infrastructure, we need to demand multi-layer decomposition below my critical functional-model (NF) level.  Today, in NFV, low-level deployments via the virtual infrastructure manager (VIM) function aren’t required to be modeled through higher layers.  That encourages subducting my administrative and geographic domain functions into the VIM, which means that vendors who have product-specific VIMs can then build silo implementations.

Some operators have suggested that something like OpenDaylight, with its universal architecture that connects pretty much anything using pretty much anything, is a solution.  It’s true that a standard supermanager would open things up, but I wonder whether a monolithic implementation is the best approach; why not model it?  In any event, OpenDaylight manages connectivity, not function deployment, so you can’t use it for all the supermanager functions.  Operator architectures already reflect that limitation in how they place SDN control and NFV deployment relative to their own models.

I think that the need to generalize supermanager functions also argues against saying that you have to transition to a topology-modeled approach like YANG below the NF.  You can model all of this stuff with TOSCA, though YANG might be more straightforward as a means of describing domain and nodal connectivity, and thus of framing routes.  A combination of the two seems a reasonable approach, but only if somebody steps up to propose a good, open model for doing it.  I’m still waiting to hear one.
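Just to make the shape of such a proposal concrete, here is one plausible structure, purely my own sketch in plain data rather than real TOSCA or YANG: the functional layers stay in a TOSCA-like intent structure, and a leaf domain hands off to a YANG-like connectivity sub-model only where it actually needs one.

```python
# Sketch of a combined approach.  The upper layers are functional and
# resource-independent; each decomposed domain either carries an explicit
# topology sub-model or delegates to an opaque supermanager API.
# Structure and names are my own illustration, not a proposed standard.

service_model = {
    "nf": "ip-vpn",                         # functional, resource-independent
    "decomposes_to": [
        {
            "admin_domain": "eu-west",
            "realization": "topology",      # explicit route/connectivity needed
            "topology_model": {             # YANG-like: nodes, links, routes
                "nodes": ["ldn-gw-1", "par-gw-2"],
                "links": [{"a": "ldn-gw-1", "b": "par-gw-2", "role": "vpn-trunk"}],
            },
        },
        {
            "admin_domain": "us-east",
            "realization": "supermanager",  # one opaque API call, no topology exposed
            "api_target": "us-east-domain-manager",
        },
    ],
}
```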