Of Networks, Management Scope, Modeling, and Automation

Service lifecycle automation is absolutely critical to operator transformation plans, but frankly it’s in a bit of a disorderly state.  Early on, we presumed that services were built by sending an order to a monolithic system that processed the order and deployed the necessary assets.  This sort-of-worked for deployment, but it didn’t handle service lifecycles at all, much less automate them.  A better way is needed.

One trend I see emerging in software lifecycle automation is what I’ll call representational intent.  The concept, as applied to networks, dates back more than a decade to the IPsphere Forum, where “services” were divided into “elements”, and the implementation of elements was based on defining a software agent that represented the element.  Manipulate the agent and you manipulated the element.  The importance of the concept can be seen in part through its history, and in part by hypothesizing its future.

The “representative” notion here is important because services and service elements are created through coerced cooperative behaviors, and while the elements are individually aware of what they’re cooperating to do, the awareness typically stops at an element boundary.  An access network knows access, but it doesn’t know it’s a part of a VPN.  There has to be some mechanism to introduce service-wide awareness into the picture, and if the elements themselves aren’t aware of services then we need to create representations of them that can be made aware.

This all leads to the second term, the “intent”.  An intent model is inherently representational; it abstracts a variety of implementations that are functionally equivalent into a single opaque construct.  Manipulating the intent model, like manipulating any representational model, manipulates whatever is underneath.  Structuring intent models structures the elements the models represent and adds the service-awareness that’s essential for effective lifecycle automation.
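
To put the point in concrete (if purely illustrative) terms, here’s a minimal Python sketch of an intent model as an opaque abstraction; the class and parameter names are mine, not any standard’s.

```python
# A minimal sketch (hypothetical names) of an intent model as an opaque
# abstraction: callers state what they want, not how it is implemented.
from abc import ABC, abstractmethod


class IntentModel(ABC):
    """Represents an element by its intent (the 'what'), hiding the 'how'."""

    @abstractmethod
    def apply(self, intent: dict) -> None:
        """Drive the underlying implementation toward the stated intent."""

    @abstractmethod
    def status(self) -> dict:
        """Report whether the intent is currently being met."""


class MplsVpnAccess(IntentModel):
    """One functionally equivalent implementation behind the abstraction."""
    def apply(self, intent: dict) -> None:
        print(f"Provisioning MPLS access for {intent['site']} at {intent['bandwidth_mbps']} Mbps")

    def status(self) -> dict:
        return {"sla_met": True}


class SdWanAccess(IntentModel):
    """Another implementation; the caller can't tell the difference."""
    def apply(self, intent: dict) -> None:
        print(f"Provisioning SD-WAN access for {intent['site']} at {intent['bandwidth_mbps']} Mbps")

    def status(self) -> dict:
        return {"sla_met": True}


# Manipulating the model manipulates whatever is underneath.
for access in (MplsVpnAccess(), SdWanAccess()):
    access.apply({"site": "branch-12", "bandwidth_mbps": 50})
```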

The seminal IPsphere was absorbed by the TMF, and whether or not there was a connection between that absorption and the notions of the Service Delivery Framework (SDF) and NGOSS Contract, those two concepts picked up the torch within the TMF.

SDF was explicitly a representational intent approach.  A service was composed from elements represented by an SDF model, and the model enveloped not only the elements but the management strategy.  An SDF element might be totally self-managed, committing to an SLA and operating within itself to meet it.  That’s what most would say is the current thinking of an intent-modeled domain; it fixes what it can and passes everything else off to the higher level.  Where SDF broke from strict “intent-ness” is that it also admitted other models of management, in which the “intent model” exposed a management interface to be driven from the outside rather than self-remediating.
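
To make the distinction concrete, here’s a minimal Python sketch of the two management postures; the classes, fault names, and remediation rules are purely illustrative.

```python
# A sketch (hypothetical classes) of the two management postures SDF allowed:
# a self-managed element that remediates internally against its SLA, and an
# externally managed element that exposes a management interface.

class SelfManagedElement:
    """Commits to an SLA and fixes what it can; escalates the rest."""
    def __init__(self, escalate):
        self.escalate = escalate  # callback to the higher-level service layer

    def on_fault(self, fault: str) -> None:
        if fault in ("link-flap", "congestion"):
            print(f"remediating '{fault}' internally")
        else:
            self.escalate(fault)  # only what can't be fixed goes up


class ExternallyManagedElement:
    """Exposes its management interface for the service layer to drive."""
    def set_parameter(self, name: str, value) -> None:
        print(f"outside manager set {name}={value}")

    def read_status(self) -> dict:
        return {"state": "degraded", "packet_loss_pct": 0.4}


service_layer_log = []
elem = SelfManagedElement(escalate=service_layer_log.append)
elem.on_fault("link-flap")   # handled inside the element
elem.on_fault("fiber-cut")   # escalated to the service layer
print("escalated:", service_layer_log)
```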

There were a lot of concerns about the SDF approach from the operator side.  I recall an email I got from a big Tier One, saying “Tom, you have to understand that we’re 100% behind implementable standards, but we’re not sure this is one.”  I undertook, with operator backing, the original ExperiaSphere project, which was a Java implementation of a representational intent approach.

In this first-generation ExperiaSphere, the intent model representations were created by a software implementation, meaning that the service context was explicitly programmed into a “service factory” that emitted an order template.  That template, when resubmitted to the factory, filled the order and sustained the service.  I presented the results to the TMF team.
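
In rough and purely illustrative Python terms, the pattern looked something like the sketch below; the ServiceFactory class and its field names are my own shorthand, not the original Java implementation.

```python
# A rough sketch of the first-generation ExperiaSphere pattern as described:
# a "service factory" emits an order template, and the same factory, given
# the filled-in template back, deploys and sustains the service.

class ServiceFactory:
    def __init__(self, service_type: str, parameters: list):
        self.service_type = service_type
        self.parameters = parameters  # what an order must supply

    def emit_template(self) -> dict:
        """Step 1: produce an order template for this service type."""
        return {"service_type": self.service_type,
                "order": {p: None for p in self.parameters}}

    def fill_order(self, template: dict) -> None:
        """Step 2: the filled template comes back and drives deployment."""
        missing = [k for k, v in template["order"].items() if v is None]
        if missing:
            raise ValueError(f"order incomplete, missing {missing}")
        print(f"deploying {template['service_type']} with {template['order']}")


factory = ServiceFactory("vpn", ["sites", "bandwidth_mbps", "sla_class"])
order = factory.emit_template()
order["order"].update(sites=3, bandwidth_mbps=100, sla_class="gold")
factory.fill_order(order)
```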

Meanwhile the NGOSS (Next-Generation OSS) project was approaching things from a slightly different angle.  SDF and (first-generation) ExperiaSphere both represented elements of a service with parallel-plane software processes.  The NGOSS Contract approach represented them instead as a data model.  Lifecycle processes are of course tickled along their path by events, and what NGOSS Contract proposed was to define, in the service contract, the linkages between a state/event representation of the behavior of a service/element and the processes that handle the events.
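
Here’s a minimal sketch of that idea in Python; the states, events, and process names are invented for illustration, and in a real contract the linkages would presumably reference processes by name rather than binding functions directly.

```python
# A minimal sketch of the NGOSS-Contract idea as described here: the service
# contract's data model carries a state/event table that binds lifecycle
# events to the processes that handle them.

def activate(ctx):
    print("running activation process")
    ctx["state"] = "active"

def remediate(ctx):
    print("running fault remediation")
    ctx["state"] = "active"

def tear_down(ctx):
    print("running teardown")
    ctx["state"] = "terminated"

# The state/event table lives in the contract (data), not in code.
contract = {
    "state": "ordered",
    "state_event_table": {
        ("ordered", "activate"): activate,
        ("active", "fault"): remediate,
        ("active", "cancel"): tear_down,
    },
}

def handle_event(contract: dict, event: str) -> None:
    """Look up the process bound to (current state, event) and run it."""
    process = contract["state_event_table"].get((contract["state"], event))
    if process:
        process(contract)
    else:
        print(f"no process bound to ({contract['state']}, {event})")

handle_event(contract, "activate")  # ordered -> active
handle_event(contract, "fault")     # remediated, stays active
handle_event(contract, "cancel")    # active -> terminated
```

The key point is that the linkages live in the contract data, so changing lifecycle behavior means editing the model rather than the code.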

In parallel with the TMF work, the enterprise space was focusing on automating application deployment, and the initial thrust was to have the development team create a set of deployment instructions that could be handed off to operations, hence the term “DevOps”.  DevOps emerged with two broad models: the “imperative” model, which enhanced the notion of “scripting” to enter commands into systems management, and the “declarative” model, which defined an end-state that deployment was then driven to match.  DevOps originally handled only deployment, but it has been enhanced to add the notion of “events”, which can be used to add at least some lifecycle management.  I won’t go into DevOps here beyond saying that the declarative approach is evolving much like intent models, and that any lifecycle manager could invoke DevOps at the bottom to actually control resources.
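
In illustrative terms (plain Python, not any particular DevOps tool’s syntax), the difference between the two models looks something like this.

```python
# A sketch contrasting the two DevOps models: imperative as an ordered list
# of commands, declarative as a desired end-state plus a reconcile step.

# Imperative: say how, step by step.
imperative_steps = [
    "install package router-agent",
    "write /etc/router-agent/config",
    "restart router-agent",
]
for step in imperative_steps:
    print("run:", step)

# Declarative: say what, and let the tool converge on it.
desired = {"router-agent": {"installed": True, "running": True}}
actual = {"router-agent": {"installed": True, "running": False}}

def reconcile(desired: dict, actual: dict) -> None:
    for name, want in desired.items():
        have = actual.setdefault(name, {})
        for key, value in want.items():
            if have.get(key) != value:
                print(f"converging {name}.{key} -> {value}")
                have[key] = value

reconcile(desired, actual)  # an event (drift) can re-trigger this loop
```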

The TMF model also spawned two activities: the CloudNFV initiative in the ETSI NFV ISG, and my second-generation ExperiaSphere project.  Second-generation ExperiaSphere expanded on the NGOSS Contract notion, framing an explicit data-model architecture to define services and manage the service lifecycle.  CloudNFV took a different path because its core logic was provided by software developed by EnterpriseWeb, which brought the capability of very dynamic object modeling, not of a deployment but of the service itself.  A dynamic model is created from the service and the dependencies of its elements.
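
A rough sketch of the hierarchical data-model idea, with invented class names and remediation rules (my own illustration, not the actual ExperiaSphere schema), might look like this.

```python
# A sketch of a hierarchical service data model: a service decomposes into
# child elements, each an intent model, and lifecycle events climb the
# hierarchy only when a child can't resolve them itself.

class ModelNode:
    def __init__(self, name: str, children=None):
        self.name = name
        self.children = children or []
        self.parent = None
        for child in self.children:
            child.parent = self

    def can_fix(self, event: str) -> bool:
        return event == "congestion"  # illustrative local remediation

    def handle(self, event: str) -> None:
        if self.can_fix(event):
            print(f"{self.name}: handled '{event}' locally")
        elif self.parent:
            print(f"{self.name}: escalating '{event}'")
            self.parent.handle(event)
        else:
            print(f"{self.name}: '{event}' reached the service level")


vpn = ModelNode("vpn-service", [
    ModelNode("access-element"),
    ModelNode("core-element"),
])

vpn.children[0].handle("congestion")    # fixed inside the access element
vpn.children[0].handle("node-failure")  # escalates to vpn-service
```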

The CloudNFV model, which has been greatly enhanced in its service lifecycle support, has been the focus of a number of TMF Catalysts, where it won awards.  The ten-thousand-foot summary of the idea is that service elements are represented by objects, each of which is onboarded and described in terms of its dependencies (interfaces of some sort, for example).  Services are created by collecting the objects in the structure of the service itself, not in the structure of a parallel model.  The objects are their own model, and at the same time they’re a kind of programming language that gets executed when a lifecycle event occurs.
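
Here’s a loose Python illustration of that notion, and I stress it’s my own sketch rather than EnterpriseWeb’s implementation: objects declare what they provide and require, assembly binds them into the structure of the service, and a lifecycle event is executed against that structure.

```python
# A loose sketch: each service element is an onboarded object that declares
# its dependencies, the service is the linked objects themselves, and a
# lifecycle event is "executed" against that object graph.

class ServiceObject:
    def __init__(self, name: str, provides: set, requires: set):
        self.name = name
        self.provides = provides  # interfaces this object offers
        self.requires = requires  # interfaces it depends on
        self.bindings = {}        # resolved at assembly time

    def on_event(self, event: str) -> None:
        print(f"{self.name}: reacting to '{event}'")
        for dep in self.bindings.values():
            dep.on_event(event)   # the object graph IS the model


def assemble(objects):
    """Bind each required interface to an object that provides it."""
    for obj in objects:
        for need in obj.requires:
            provider = next(o for o in objects if need in o.provides)
            obj.bindings[need] = provider
    return objects


catalog = assemble([
    ServiceObject("vpn", provides={"vpn"}, requires={"access", "core"}),
    ServiceObject("access", provides={"access"}, requires=set()),
    ServiceObject("core", provides={"core"}, requires=set()),
])
catalog[0].on_event("activate")  # the event ripples through dependencies
```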

You can see, looking over all of this, that we’re evolving our notion of representing services.  There are four pathways out there.  First, you can have parameters that describe things, processed as a transaction.  This is the traditional OSS/BSS model, and there’s a good deal of that flavor still in SDN and NFV specifications.  Second, you can have a set of intent models that are “authored” into a service through programming or through a GUI.  Many of the current implementations of service lifecycle management fall into this category.  Third, you can have a data modeling architecture that defines a service deployment as a set of hierarchical intent models, and finally you can have a service that’s defined by its natural “real-world” structure, without external modeling or assembly.

With the exception of the first of these approaches, everything is a form of representational intent, which is why I opened with the concept.  The debate in the market, in implicit form only at this point since we don’t even have a broad understanding that there are options available, is how to represent the representational intent.  We can program or author it, we can data-model it, or we can let the natural assembly of service elements define their own model.  Which is best?

The only long-term answer to that question is going to be given by experience, which we’ve had precious little of so far.  To get it, we need to follow a service from a gleam in some marketing type’s eye to the financial realization of the opportunity by widespread deployment and use.  There are some general points that seem valid, though.  The lower-numbered approaches are adequate for services that have few elements and are largely static in their composition, particularly when the element domain management (see below) is self-contained.  The more dynamism and scale you introduce, the more you need to think about moving toward the higher-level models.

Element domain management is important to the service management approach because many service lifecycle systems are really manager-of-managers technologies, for the good reason that there are already tools to manage router implementations of VPNs or the deployment of virtual functions.  DevOps tools offer at least some level of element domain management for deployed components, provided that they support event-handling (which the major ones do) and that they’re used carefully.  The real role of declarative DevOps as a broad cross-application-and-service strategy is another question mark in this space, but I think it will evolve into one of the four approaches I’ve outlined, most likely the third.  DevOps models not a service or application but a process of deployment, which makes its approach similar to a hierarchical model.
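
A manager-of-managers structure, again sketched in illustrative Python with invented domain and class names, would look something like this.

```python
# A sketch of the manager-of-managers pattern: the service lifecycle manager
# doesn't touch resources directly, it delegates each element to whatever
# domain manager (router management, VNF management, a DevOps tool) already
# owns that domain.

class DomainManager:
    def __init__(self, name: str):
        self.name = name

    def deploy(self, element: str) -> None:
        print(f"{self.name}: deploying {element}")

    def handle_event(self, element: str, event: str) -> None:
        print(f"{self.name}: handling '{event}' on {element}")


class ServiceLifecycleManager:
    """Maps service elements to the domain managers that own them."""
    def __init__(self, domains: dict):
        self.domains = domains

    def deploy_service(self, elements: dict) -> None:
        for element, domain in elements.items():
            self.domains[domain].deploy(element)

    def on_event(self, element: str, domain: str, event: str) -> None:
        self.domains[domain].handle_event(element, event)


mom = ServiceLifecycleManager({
    "router": DomainManager("router-domain-manager"),
    "nfv": DomainManager("vnf-manager"),
    "devops": DomainManager("devops-tool"),
})
mom.deploy_service({"vpn-core": "router", "firewall-vnf": "nfv", "portal-app": "devops"})
mom.on_event("firewall-vnf", "nfv", "restart-needed")
```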

Given the importance of a specific approach to lifecycle automation, and the fact that there are clear impacts of any approach on the kinds of services that could be deployed, more clarity is needed here.  I think that every vendor who purports to have orchestration/automation should be asked to frame their solution in one of the four solution ranges I presented here.  I’m going to do that for any that I blog about, so if you want to brief me on your solution expect to have that question asked.  It’s time we started getting specific (and more useful) in how we look at these products, because understanding their fundamental approach is the only pathway to understanding their fundamental value.