Service Lifecycle Modeling: More than Just Intent

I blog about a lot of things, but the topic that seems to generate the most interest is service lifecycle automation.  The centerpiece of almost every approach is a model, a structure that represents the service as a collection of components.  The industry overall has tended to look at modeling as a conflict of modeling languages: are you a YANG person, a TOSCA person, or a TMF SID person?  We now have the notion of “intent modeling”, which some see as the super-answer, and there are modeling approaches that could be adopted from the software/cloud DevOps space.  How do you wade through all of this stuff?

From the top, of course.  Service lifecycle automation must be framed on a paramount principle, which is that “automation” means direct software handling of service events via some mechanism that associates events with actions based on the goal state of each element and the service overall.  The notion of a “model” arises because it’s convenient to represent the elements of a service in a model, and define goal states and event-to-process relationships based on that.
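
To make that concrete, here’s a minimal sketch, in Python, of what that event-to-process association might look like.  Everything here (the states, the event names, the handlers) is hypothetical; it’s one way to express the idea, not a reference implementation.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: each modeled element carries a goal state, and a
# table associates (current state, event) pairs with the process that
# moves the element toward that goal.

@dataclass
class Element:
    name: str
    goal_state: str            # e.g. "active"
    current_state: str = "orderable"

def deploy(e: Element) -> None:
    print(f"deploying {e.name}")
    e.current_state = e.goal_state

def redeploy(e: Element) -> None:
    print(f"redeploying {e.name} after a fault")
    e.current_state = e.goal_state

Handler = Callable[[Element], None]

# The event-to-process relationships, defined against the model.
LIFECYCLE: dict[tuple[str, str], Handler] = {
    ("orderable", "activate"): deploy,
    ("active", "fault"): redeploy,
}

def handle_event(e: Element, event: str) -> None:
    # "Automation" means this lookup runs with no human in the loop.
    action = LIFECYCLE.get((e.current_state, event))
    if action is not None:
        action(e)

handle_event(Element("vpn-core", goal_state="active"), "activate")
```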

The problem with this definition as a single modeling reference is that term “service elements”.  A service is potentially a collection of thousands of elements.  Many of the elements are effectively systems of lower-level elements (like a router network), or complex elements like hosted virtual functions that have one logical function and multiple physical components.  The structural reality of networks generates four very specific problems.

Problem number one is deciding what you are modeling.  It is possible to model a service by modeling the specific elements and their relationships within the service itself.  Think of this sort of model as a diagram of the actual service components.  The problem this poses is that the model doesn’t represent the sequencing of steps that may be needed to deploy or redeploy, and it’s harder to use different modeling languages when some pieces of the process (setup of traditional switches/routers, for example) already have their own effective modeling approaches.  This has tended to emphasize the notion of modeling a service as a hierarchy, meaning you are modeling the process of lifecycle management, not the physical elements.
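
To illustrate the distinction with a deliberately trivial (and entirely invented) example, here are the two views of the same service:

```python
# Illustrative only: two ways of modeling the same simple VPN service.

# View 1: a diagram of the actual components and their relationships.
component_view = {
    "nodes": ["edge-router-1", "edge-router-2", "mpls-core"],
    "links": [("edge-router-1", "mpls-core"),
              ("edge-router-2", "mpls-core")],
}

# View 2: a hierarchy of lifecycle-management steps, decomposed
# top-down.  Nothing here names a specific box; each leaf is a step
# that could be described in whatever modeling language already suits
# it (YANG, TOSCA, and so forth).
process_view = {
    "VPNService": {
        "AccessOnRamp": ["ProvisionEdgeDevices"],
        "CoreTransport": ["ConfigureMPLSPaths"],
    }
}
```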

The second problem is simple scale.  If we imagine a model as a single structure that represents an entire service, it’s clear in an instant that there’s way too much going on.  Remember those thousands of elements?  You can imagine that trying to build a complete model of a large service, as a single flat structure, would be outlandishly difficult.  The issue of scale has contributed to the shift from modeling the physical service to modeling the deployment/redeployment steps.

Problem three is the problem of abstraction.  Two different ways of doing the same thing should look the same from the outside.  If they don’t, then changing how some little piece of a model is implemented could mean changing the whole model.  Intent modeling has come to be a watchword here, and among its useful properties are that it can collect different implementations of the same functionality under a common model, and that it can support hierarchical nesting of model elements, an essential property when you’re modeling steps or relationships, not the real structure.
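
Here’s a sketch of that first property, with hypothetical class names throughout: two implementations of the same functionality collected behind one external face, so neither the model above nor the software driving it cares which one is used.

```python
from abc import ABC, abstractmethod

# Hypothetical names throughout: the outside world sees only the
# intent-model interface, never the implementation behind it.

class FirewallIntent(ABC):
    @abstractmethod
    def deploy(self) -> None: ...

    @abstractmethod
    def status(self) -> str: ...

class HostedVNFFirewall(FirewallIntent):
    def deploy(self) -> None:
        print("spinning up a firewall VNF on a cloud host")

    def status(self) -> str:
        return "active"

class ApplianceFirewall(FirewallIntent):
    def deploy(self) -> None:
        print("configuring a physical firewall appliance")

    def status(self) -> str:
        return "active"

def activate(fw: FirewallIntent) -> None:
    fw.deploy()    # the caller never learns which implementation ran

activate(HostedVNFFirewall())   # swap in ApplianceFirewall() freely
```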

Problem four is suitability and leveraging.  We have many software tools already available to deploy hosted functions, connect things, set up VPNs, and so forth.  Each of these tools has proved itself in the arena of the real market; they are suitable to their missions.  They are probably not suitable for other missions; you wouldn’t expect a VPN tool to deploy a virtual function.  You want to leverage good stuff where it’s available, meaning you may have to adopt multiple approaches depending on just what you’re modeling.  I think that one of the perhaps-fatal shortcomings of SDN and NFV work to date is the failure to exploit things that were already developed for the cloud.  That can be traced to the fact that we have multiple modeling approaches to describe those cloud-things, and picking one would invalidate the others.

As I noted above, it may well have been the recognition of these points that prompted the concept of intent models.  An intent model is an abstraction that asserts specific external properties related to its functionality and hides how they’re implemented.  There’s no question that intent models, if properly implemented, offer a major advance in the implementation of service lifecycle automation, but the “properly implemented” qualifier here is important, because they don’t do it all.

Back in the old IPsphere Forum days, meaning around 2008, we had a working-group session in northern NJ to explore how IPsphere dealt with “orchestration”.  The concept at the time was based on a pure hierarchical model, meaning that “service” decomposed into multiple “subservices”, each of which was presumed to be orchestrated through its lifecycle steps in synchrony with the “service” itself.  Send an “Activate” to the service and it repeats that event to all its subservices, in short.  We see this approach even today.
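
In code, that pure hierarchical model is almost trivially simple; this sketch (names invented) captures the lockstep behavior:

```python
# A sketch of pure hierarchical orchestration: the service repeats any
# event it receives to all of its subservices, in lockstep.

class Subservice:
    def __init__(self, name: str):
        self.name = name

    def on_event(self, event: str) -> None:
        print(f"{self.name}: handling {event}")

class Service:
    def __init__(self, subservices: list[Subservice]):
        self.subservices = subservices

    def on_event(self, event: str) -> None:
        for sub in self.subservices:    # everyone gets the same event
            sub.on_event(event)

Service([Subservice("access"), Subservice("core")]).on_event("Activate")
```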

One of the topics of that meeting was a presentation I made, called “meticulous orchestration”.  The point of the paper was that the subordinate elements of a given model (an “intent model” in today’s terminology) might have to be orchestrated in a specific order, and that the lifecycle phases of the subordinates might not simply mimic those of the superior.  (Kevin Dillon was the Chairman of the IPSF at the time; I hope he remembers this discussion!)
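
To show the contrast with the lockstep model above, here’s a hedged sketch of what I mean (my illustration, not code from that presentation): the superior’s Activate decomposes into an ordered plan whose phases need not mirror the superior’s own event.

```python
# "Meticulous orchestration" sketched: subordinates are driven in a
# specific order, through phases that differ from the superior's event.

class MeticulousService:
    def __init__(self, plan: list[tuple[str, str]]):
        # Each step is (subordinate, phase), executed strictly in order.
        self.plan = plan

    def on_event(self, event: str) -> None:
        if event == "Activate":
            for subordinate, phase in self.plan:
                print(f"{subordinate}: {phase}")

MeticulousService([
    ("CoreTransport", "activate"),    # the core must come up first...
    ("AccessOnRamp", "pre-stage"),    # ...and access runs extra phases
    ("AccessOnRamp", "activate"),
]).on_event("Activate")
```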

The important thing about this concept, from the perspective of modeling, is that it demonstrated that you might need a model element that had no service-level function at all, but rather simply orchestrated the stuff it represented.  It introduced something I called in a prior blog “representational intent.”  If you are going to have to deploy models, and if the models have to be intent-based and so contain multiple implementations at a given level, why not consider thinking in two levels: the model domain and the service domain?

In traditional hierarchical modeling, you need a model element for every nexus, meaning the end of every fork and every forking point.  The purpose of that model element is to represent the collective structure below, allowing it to be an “intent model” with a structured interior that will vary depending on the specific nature of the service and the specific resources available at the points where the service has to be delivered or functionality has to be hosted.  It also ensures that when a recovery process for a single service element is undertaken and fails to complete, recovery at a higher level is coordinated with the rest of the service.

Suppose that one virtual function in a chain has a hosting failure, and the intent model representing it (“FirewallVNF” for example) cannot recover locally, meaning that the place where the function was formerly supported can no longer be used.  Yes, I can spin up the function in another data center, but if I do that, won’t the connection that links it to the rest of the chain be broken?  The function itself doesn’t know that connection, but the higher-level element that deployed the now-broken function does.  Not only that, it’s possible that the redeployment of the function can’t be done in the same way in the new place because of a difference in technology.  Perhaps I now need a FirewallVNF implementation that matches the platform of the new data center.  Certainly the old element can’t address that; it was designed to run in the original place.
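
Here’s how that escalation might be sketched; the data center names, classes, and recovery logic are invented for illustration, not drawn from any real implementation.

```python
# Invented sketch: the child intent model tries local recovery first;
# if it cannot recover, the fault escalates to the parent, which alone
# knows the chain's connections and can redeploy and re-stitch.

class FirewallVNF:
    def __init__(self, datacenter: str):
        self.datacenter = datacenter

    def recover_locally(self) -> bool:
        return False    # the original hosting point is gone

class ServiceChain:
    def __init__(self):
        self.firewall = FirewallVNF("dc-east")

    def on_child_fault(self) -> None:
        if self.firewall.recover_locally():
            return
        # Only this level can redeploy elsewhere and rebuild the link;
        # the new site may even demand a different implementation.
        self.firewall = FirewallVNF("dc-west")
        print("re-stitching the chain's connection to dc-west")

ServiceChain().on_child_fault()
```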

You see how this has to work.  The model has to provide not only elements that represent real service components, but also elements that represent the necessary staging of deployment and redeployment tasks.  Each level of such a structure models context and dependency.

There are other approaches to modeling a service, of course, but the hierarchical approach that defines structure through successive decomposition is probably the best understood and most widely accepted.  But even that popular model has to be examined in light of the wide-ranging missions that transformation can be expected to involve, to be sure that we’re doing the right thing.

You could fairly say that a good modeling approach is a necessary condition for service lifecycle automation, because without one it’s impractical or even impossible to describe the service in a way that software can manage.  Given that, the first step in lifecycle automation debates should be to examine the modeling mechanism to ensure it can describe every service structure we are likely to deploy.

There are many modeling languages, as I said at the opening.  There may be many modeling approaches.  We can surely use different languages, even different approaches, at various places in a service model, but somehow we have to have a service model, something that describes everything, deploys everything, and sustains everything.  I wonder if we’re taking this as seriously as we must.