What We Can Learn From the NFV ISG Modeling Symposium’s Presentations

The ETSI NFV ISG has publicly released the presentations from the NFV modeling and information model workshop held recently and hosted by CableLabs. I’ve referenced some aspects of this meeting in past blogs, but the release of the presentation material supports a look at how the modeling issue is developing, and perhaps some conclusions on just what the various standards development organizations (SDOs) are proposing and what the best approach might be.

Happy convergence doesn’t seem to be one of the options. When you look at the material it’s hard to miss the fact that all of the bodies represented have taken slightly or significantly different approaches to “modeling” service information. Where there are explicit connections among the models (the TMF, ITU, MEF, and ONF models, for example), the symbiosis seems to be an opportunistic way of covering all the issues of services by referencing someone else rather than by hammering out a cooperative approach. The general picture we have is one of multiple approaches that in some cases deal with different aspects of the overall service-modeling problem, and in other cases deal with the same aspects but differently.

Top-down, service-oriented modeling also seems to be a casualty in all the SDO proposals. I think all are essentially bottom-up in their approach, and most if not all seem linked to a “componentized” view of service automation, one where a small number of software processes act on model data to produce a result. This is what the ETSI architecture reflects; we have MANO and VNFM and VIMs and so forth. Only the OASIS TOSCA presentation seems to break free of these limitations; more on that later.

We’re not replete with broad-based models either.  Many of the modeling approaches are quite narrow; they focus on the specific application of vCPE and service chaining rather than on the broader issue of making a business case for an NFV-driven transformation.  Some of this bias is justified; service chaining presents a different model of component connectivity than we see in applications in the cloud (as I said in a prior blog).  However, there is obviously a risk in focusing on a single application for NFV, particularly one that could well end up being supported by edge hosting of virtual functions, which would make some of the issues moot.

What this adds up to is that we have not only the problem of synchronizing the SDO modeling equivalent of Franklin’s 13 clocks, we have our clocks in different time zones. If a harmonious model for NFV services, or even for service modeling at a higher level, is the goal, then I don’t think there’s much chance of achieving it by rationalizing the viewpoints represented in these presentations. Many of the bodies have long histories with their own approaches, and even more have specific, limited missions that only overlap the broad service-modeling problem here and there rather than resolving it in a broad sense. My conclusion is that we are not going to have a unified model of services, period, so the question is what we should or could have.

The core issue the material raises, in terms of the quality of a model rather than the probability of harmonizing on one, is the role of the model itself. As I said, most of the material presents the modeling as an input to a specific set of processes. This means that “operations”, “network management”, “data center management”, and “NFV orchestration and management” are all best visualized as functional boxes with relationships/interfaces to each other, defined by models. The question is whether this approach is ideal.

A “service” in a modernizing network is a complex set of interrelated features, hosted on an exploding set of resources.  The functional building blocks of service processes we have today were built around device-centric practices of the past.  In a virtual world, we could certainly sustain these practices by building “virtual devices” and then mapping them into our current management and operations models.  That, in my view, is what the great majority of the presentations made in the CableLabs session propose to do.  The models are simply ways of communicating among those processes.  This approach also tends to reduce the impact of SDN or NFV on surrounding management and operations systems, particularly the latter, and that simplification is helpful to vendors who don’t want to address that aspect of infrastructure modernization.  However, there are significant risks associated with managing what are now non-existent devices.

TOSCA, along with my own CloudNFV and ExperiaSphere approaches, would in my view either permit or mandate a different model-to-process relationship, one where processes are driven by the models and the models act as a conduit linking events to processes. The OASIS TOSCA presentation was the only one that I think addressed the specific point of lifecycle management, and thus the only one that could argue it would allow automation of service operations and management processes. Model-steering of events was the innovation of the TMF’s NGOSS Contract and GB942 approach, but interestingly the TMF itself made no reference to that approach in its submission to the sessions CableLabs hosted. That’s a shame, because with a state/event approach all operations and management processes become microservices invoked by events and mediated by service element states. Thus, we break the static boundaries of operations and management “applications,” and we can virtualize management as easily as we virtualize functions.
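
To make that state/event idea concrete, here is a minimal sketch of how a model element might steer events to operations microservices. This is purely my own illustration of the NGOSS Contract concept; the element structure, state names, and handler functions are hypothetical, not drawn from GB942 or from any of the workshop material.

```python
# Minimal sketch of model-steered events: a service element carries its own
# state/event table, so the model itself names the operations/management
# processes to invoke. All names and states here are hypothetical.

def deploy_element(element, event):
    print(f"deploying {element['name']}")
    return "active"

def notify_fault_ops(element, event):
    print(f"fault on {element['name']}: {event['detail']}")
    return "degraded"

def redeploy_element(element, event):
    print(f"redeploying {element['name']}")
    return "active"

vpn_element = {
    "name": "vpn-core",
    "state": "ordered",
    # (current state, event type) -> operations "microservice" to invoke
    "state_events": {
        ("ordered", "activate"): deploy_element,
        ("active", "fault"): notify_fault_ops,
        ("degraded", "fault"): redeploy_element,
    },
}

def handle_event(element, event):
    """Look up (state, event type) in the model and run the bound process."""
    handler = element["state_events"].get((element["state"], event["type"]))
    if handler:
        element["state"] = handler(element, event)

handle_event(vpn_element, {"type": "activate"})                      # -> deploy_element
handle_event(vpn_element, {"type": "fault", "detail": "host lost"})  # -> notify_fault_ops
```

The point of the sketch is that nothing outside the model decides which process runs; the model mediates every event, which is what makes the processes interchangeable microservices rather than fixed applications.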

What makes this notion different is that operations/management microservices can then be invoked for any service/resource event at any point, directly from the model.  What you model, you manage.  A service model then defines everything associated with the deployment and management of a service; there is no need for other descriptors.  This isn’t a philosophical point.  The fundamental question is whether virtualization, which builds services from “functions” that are then assigned to resources, can be managed without managing the functions explicitly.  The virtual device model combines functions and their resources, but there are obvious limitations to that approach given that the implementation of a function would look nothing like the device from which the function was derived.  How do you reflect that in a virtual-device approach?

I think that it would be possible to nest different models below service-layer models, and for sure it would be possible to use network-centric modeling like YANG/NETCONF “below” the OpenStack or VIM layer in NFV.  Where systems of devices that already define their own rules of cooperation exist (as they do in all legacy networks) there’s nothing wrong with reflecting those rules in a specialized way.  I don’t see that point called out in the material explicitly, though it does seem an outcome you could draw from some of the hierarchies of standards and models that were presented.
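
As a rough sketch of what that nesting might look like (again my own illustration, with placeholder model types, payloads, and controller functions, not anything presented at the workshop), a service-layer model could tag each child element with the lower-layer model type it carries and hand the payload to whatever controller understands it:

```python
# Hypothetical sketch of nested models: a service-layer element whose children
# carry different lower-layer model types. Payload strings and dispatch targets
# are placeholders, not real module names or APIs.

from typing import Callable, Dict

def send_to_netconf_domain(payload: str) -> None:
    # A real system would push this configuration via NETCONF to a legacy domain.
    print("NETCONF domain receives:", payload)

def send_to_vim(payload: str) -> None:
    # A real system would hand this to a VIM (OpenStack, for example) to host a VNF.
    print("VIM receives:", payload)

# Each lower-layer model type maps to the controller that understands it.
DISPATCH: Dict[str, Callable[[str], None]] = {
    "yang": send_to_netconf_domain,
    "tosca": send_to_vim,
}

service_model = {
    "name": "business-internet",
    "children": [
        {"name": "access-vpn", "model_type": "yang",
         "payload": "module example-vpn { leaf bandwidth { type uint32; } }"},
        {"name": "vfirewall", "model_type": "tosca",
         "payload": "node_templates: {vfw: {type: tosca.nodes.Compute}}"},
    ],
}

def deploy(service: dict) -> None:
    """Walk the service-layer model and delegate each child to its own controller."""
    for child in service["children"]:
        DISPATCH[child["model_type"]](child["payload"])

deploy(service_model)
```

The service layer doesn’t have to understand the lower-layer models at all; it only has to know where to send them, which is what lets legacy device domains keep their own rules of cooperation.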

The first milestone defined by the “CableLabs Concord”, which is just having other SDOs get back to the ETSI ISG with proposals, isn’t until March.  Significant progress, as I noted in an earlier blog, is expected by the end of this year, but that doesn’t mean that any resolution will be reached then, or even that any is possible.  Frankly, I don’t think it is.

What all this says to me is that we have a dozen chefs making dishes without a common recipe or even a common understanding of the ingredients to be used.  There are three possibilities that I see for harmonization.  One is to simply accept that TOSCA is the only model that can really accommodate all the challenges of service modeling, including and especially the event-to-process steering.  TOSCA, after all, is rooted in the cloud, which is where virtualization, SDN, and NFV all need to be rooted.  The second is to let vendors, each likely championing their own approach loosely based on some SDO offering or another, fight it out and pick the best of the remaining players after markets have winnowed things down.

What about number three? Well, that would be vendor sponsorship of TOSCA. A number of vendors have made a commitment to evolve toward a TOSCA approach, including Ciena and HPE, whose commitments to TOSCA are the most public. I cited a university-sponsored project that used TOSCA to define network services in my ExperiaSphere material, and there are a number of open-source TOSCA tools available now. However, I think OASIS’ presentation softballed TOSCA’s benefits a bit, and I’m not sure whether, absent market recognition of TOSCA’s value, any vendor would bet a lot of positioning collateral on it. They should, and if they did I think they could take the lead in the service-modeling race, and in the race to make the SDN and NFV business case.