Picking the best approach to service modeling for lifecycle management is like picking the Final Four; there’s no shortage of strongly held opinions. This blog is my view, but as you’ll see I’m not going to war to defend my choice. I’ll lay out the needs, give my views, and let everyone make their own decision. If you pick something that doesn’t meet the needs I’m describing, then I believe you’ll face negative consequences. But as a relative once told me a decade or so ago, “Drive your own car, boy!”
I’ve described in prior blogs that a service model has two distinct layers: the service layer and the resource layer. The need to reflect the significant differences in the missions of these layers, without creating a brittle structure, effectively defines a third layer, the boundary layer, and that’s where I’ve recommended the actual work of laying out the abstractions should start. I’m going to start the modeling discussion at the top, though, because that’s where the service really starts.
The service layer describes commercial issues, parameters, elements, and policies at the business level. These models, in my view, should be structured as intent models, meaning that they create abstract elements or objects whose exposed properties describe what they do, not how they do it. The beauty of an intent model is that it describes the goal, which means that the mechanisms by which that goal can be met (which live within the intent model) are invisible from the outside and, as far as the user of the model is concerned, equivalent to one another.
I’ve done a fair amount of intent modeling, in standards groups like IPsphere and the TMF, in my own original ExperiaSphere project (spawned from TMF Service Delivery Framework work, TMF519), in the CloudNFV initiative where I served as Chief Architect, and in my newer ExperiaSphere model that addressed the SDN/NFV standards as they developed. All of these recommended different approaches, from TMF SID to Java classes to XML to TOSCA. My personal preference is TOSCA because I believe it’s the most modern, the most flexible, and the most complete approach. We live in a cloud world; why not accept that and use a cloud modeling approach? But what’s important is the stuff inside.
An intent model has to describe functionality in the abstract. In network or network/compute terms, that means it has to define the function the object represents, the connections it supports, the parameters it needs, and the SLA it asserts. When intent models are nested, as they would be in a service model, each also has to define, internally, the decomposition policies that determine how objects at the next level down are linked to it. All of this can be expressed, more or less gracefully, in any of the modeling approaches I’ve mentioned, and probably in others as well.
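To make that concrete, here is a minimal sketch of one such element in Python (purely illustrative; the class and field names are my own assumptions, not drawn from TOSCA, SID, or any other standard), showing the exposed surface and the hidden decomposition policies:

```python
from dataclasses import dataclass, field
from typing import Callable

# Illustrative only: these names are assumptions, not taken from any standard.
@dataclass
class IntentModelElement:
    # Exposed surface: what the object does, not how it does it.
    function: str                  # e.g. "vpn-access"
    connections: list[str]         # connection points the object supports
    parameters: dict[str, str]     # parameters the object needs from above
    sla: dict[str, float]          # the SLA the object asserts (availability, latency, ...)

    # Hidden internals: never visible to the superior object.
    decomposition_policies: list[Callable[["IntentModelElement"], list["IntentModelElement"]]] = field(default_factory=list)

    def decompose(self) -> list["IntentModelElement"]:
        """Apply internal policies to select and link subordinate objects."""
        children: list["IntentModelElement"] = []
        for policy in self.decomposition_policies:
            children.extend(policy(self))
        return children
```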
When these models spawn subordinates through those decomposition policies, there has to be a defined set of relationships between the visible attributes of the superior object and those of its subordinates, to ensure that the intrinsic guarantees of the abstract intent model are satisfied. These relationships operate in both directions: the superior passes parameters derived from its own exposed attributes down to its subordinates, and it takes the parameters and SLA values its subordinates expose and derives its own exposed values from them.
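As a simple illustration of those two directions (my own example, not taken from any standard), the superior might apportion its latency budget among subordinates on the way down, and on the way up derive its availability as the product of theirs and its latency as their sum:

```python
def derive_downward(parent_sla: dict, shares: list[float]) -> list[dict]:
    """Split the parent's latency budget across subordinates (illustrative policy)."""
    return [{"latency_ms": parent_sla["latency_ms"] * s} for s in shares]

def derive_upward(child_slas: list[dict]) -> dict:
    """Derive the parent's exposed SLA from what its subordinates expose."""
    availability = 1.0
    latency = 0.0
    for sla in child_slas:
        availability *= sla["availability"]   # serial elements: availabilities multiply
        latency += sla["latency_ms"]          # latencies add along the path
    return {"availability": availability, "latency_ms": latency}

# Example: two subordinates, each 99.9% available with 20 ms latency,
# yield a parent SLA of roughly 99.8% availability and 40 ms latency.
print(derive_upward([{"availability": 0.999, "latency_ms": 20},
                     {"availability": 0.999, "latency_ms": 20}]))
```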
It follows from this that any level of the model can be “managed”, provided there are exposed attributes to view and change and something that can do the viewing and changing. It also follows that if there’s a “lifecycle” for the service, that lifecycle has to be derived from, or driven by, the lifecycles of the subordinate elements all the way down to the bottom. That means every intent model element or object has to have a “state” and a table that defines how events are to be processed in each of those states. Thus, each element has to specify an event interface and a state/event table that identifies the process to be run at every state/event intersection.
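In code, that table can be as simple as a two-level lookup keyed by state and event. This is a hedged sketch with invented state names, events, and processes:

```python
# Illustrative state/event table for one intent model element.
# States, events, and process names are assumptions for the sake of the example.

def activate_resources(element): print(f"activating {element}")
def report_fault(element):       print(f"raising fault on {element}")
def ignore(element):             pass

STATE_EVENT_TABLE = {
    "Ordered": {
        "ACTIVATE": (activate_resources, "Activating"),   # (process, next state)
        "FAULT":    (ignore,             "Ordered"),
    },
    "Activating": {
        "ACTIVE":   (ignore,             "Active"),
        "FAULT":    (report_fault,       "Fault"),
    },
    "Active": {
        "FAULT":    (report_fault,       "Fault"),
    },
}

def handle(element_state: str, event: str, element_id: str) -> str:
    """Run the process at the state/event intersection and return the next state."""
    process, next_state = STATE_EVENT_TABLE[element_state][event]
    process(element_id)
    return next_state
```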
Events in this approach are signals between superior and subordinate models. It’s critical that they be exchanged only across this one specific adjacency, or we’d end up with a high-level object that knows about, or hears from, something inside what’s supposed to be an opaque abstraction. When an event occurs, it’s the event that triggers the model element to do something, which means it’s the event that drives the lifecycle progression. That’s why this whole state/event mechanism is so important to lifecycle management.
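One way to enforce that adjacency rule is for the event dispatcher to reject any event whose source is not the target element’s direct parent or one of its direct children. This sketch (invented structure and names) shows the check:

```python
def dispatch(model: dict, target_id: str, source_id: str, event: str) -> None:
    """Deliver an event only if it crosses exactly one parent/child adjacency."""
    target = model[target_id]
    allowed = set(target.get("children", [])) | {target.get("parent")}
    if source_id not in allowed:
        raise ValueError(f"{source_id} is not adjacent to {target_id}; event {event} rejected")
    # ...otherwise look up the target's state/event table and run the indicated process...
```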
A service model “instance”, meaning one representing a specific service contract or agreement, is really a data model. If you took that model in its complete form and handed it to a process that recognized it, the process could handle any event and act on behalf of the model as a whole. That makes it possible to distribute, replicate, and replace processes as long as they are properly written. That includes not only the thing that processes the model to handle events, but also the processes referenced in the state/event table. The model structures all of service lifecycle management.
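A hedged sketch of such a process, with invented names, shows why it can be distributed and replicated: everything it needs in order to act arrives with the model instance itself:

```python
import json

# Hypothetical process catalog; a real deployment would reference deployable processes.
PROCESS_CATALOG = {
    "activate_resources": lambda elem: print(f"activating {elem}"),
    "report_fault":       lambda elem: print(f"fault on {elem}"),
}

def handle_event(model_json: str, element_id: str, event: str) -> str:
    """Stateless handler: the model instance carries the element's state, its
    state/event table, and its attributes, so any replica can process the event."""
    model = json.loads(model_json)
    element = model["elements"][element_id]
    process_name, next_state = element["state_event_table"][element["state"]][event]
    PROCESS_CATALOG[process_name](element_id)   # run the process named in the table
    element["state"] = next_state               # record the lifecycle progression
    return json.dumps(model)
```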
It’s easy to become totally infatuated with intent modeling, and it is the most critical concept in service lifecycle management, but it’s not the only concept. Down at the bottom of a tree of hierarchical intent models will necessarily be something that commits resources. If we presume that we have a low-level intent model that receives an “ACTIVATE” event, that model element has to be able to actually do something. We could say that the process that’s associated with the ACTIVATE in the “Ordered” state does that, of course, but that kind of passes the buck. How does it do that? There are two possibilities.
One is that the process structures an API call to a network or element management system that’s already there, and asks for something like a VLAN. The presumption is that the management system knows what a VLAN is and can create one on demand. This is the best approach for legacy services built from legacy infrastructure, because it leverages what’s already in use.
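In practice, that could be as little as a REST call to the management system’s provisioning interface. The endpoint and payload below are invented for illustration, since every EMS/NMS exposes its own API:

```python
import requests

def activate_vlan(nms_base_url: str, vlan_id: int, ports: list[str]) -> None:
    """Ask an existing EMS/NMS to create a VLAN (hypothetical API; adapt to the real system)."""
    response = requests.post(
        f"{nms_base_url}/vlans",                   # invented endpoint
        json={"vlanId": vlan_id, "ports": ports},  # invented payload schema
        timeout=30,
    )
    response.raise_for_status()
```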
The second option is that we use something model-driven to do the heavy lifting all the way down to infrastructure. TOSCA is a cloud computing modeling tool by design, so obviously it could be used to manage hosted things directly. It can also describe how to do the provisioning of non-cloud things, but unless you’re invoking that EMS/NMS process as before, you’d have to develop your own set of processes to do the setup.
Where YANG comes in, in my view, is at this bottom level. Rather than inheriting, integrating, or building a collection of vendor- and technology-specific tools, you could use YANG to model the tasks of configuring network devices and to generate the necessary NETCONF commands toward those devices. In short, you could reference a YANG/NETCONF model from within your intent model. The combination is already in use in legacy networks, and since legacy technology will dominate networking for at least four or five more years, that counts for a lot.
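As a hedged illustration of that bottom-level combination, here is how a low-level process might push a YANG-modeled configuration to a device over NETCONF using the Python ncclient library (the payload is a made-up fragment; a real one would have to conform to the device’s YANG modules):

```python
from ncclient import manager

# Made-up config fragment based on ietf-interfaces; a real payload must match
# the YANG modules the target device actually supports.
CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>GigabitEthernet0/1</name>
      <enabled>true</enabled>
    </interface>
  </interfaces>
</config>
"""

def push_config(host: str, user: str, password: str) -> None:
    """Send a NETCONF edit-config built from a YANG-modeled payload."""
    with manager.connect(host=host, port=830, username=user,
                         password=password, hostkey_verify=False) as m:
        m.edit_config(target="running", config=CONFIG)
```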
I want to close this by making a point I also made in the opening. I have a personal preference for TOSCA here, based on my own experiences, but it’s not my style to push recommendations that indulge my personal preferences. If you can do what’s needed with another model, it works for me. I do want to point out that at some point it would be very helpful to vendors and operators if models of services and service elements were made interchangeable. That’s not going to happen if we have a dozen different modeling and orchestration approaches.
The next blog in this series will apply these modeling principles to VNFs and VNF management, which will require a broader look at how this kind of structured service model supports management overall.