Everyone knows that operators and enterprises want more service automation; in fact, some classes of business users say that fixing mistakes accounts for half their total network and IT TCO. Nobody doubts that you need something to do the automating, meaning that software tools are going to have to take control of lifecycle management. The more subtle question is how those tools know what to do.
Current automation practice, largely focused on software deployment but also used in network automation, is script-based. Scripting processes duplicate what a human operator would do, and in some cases are literally recorded as “macros” while the operator does the work. The problem with this approach is that it isn’t very flexible; it’s hard to go in and adapt scripts to new conditions, or even to reflect the many variable situations that would arise with widespread use of resource pools and virtualization.
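To make the brittleness concrete, here’s a minimal, purely hypothetical sketch of script-style automation. Every name and step is invented; the point is only that the “macro” hard-codes the operator’s actions, including the target devices, so any change in topology or in where resources come from means rewriting the steps by hand.

```java
// Hypothetical sketch of script-style automation: a fixed sequence of steps
// that mirrors what a human operator would type. Nothing here adapts to
// resource pools or to conditions the script author did not anticipate.
import java.util.List;

public class DeployScript {
    public static void main(String[] args) {
        // The "macro": every step is hard-coded, including the target devices.
        List<String> steps = List.of(
            "ssh edge-router-7 'configure vlan 210'",
            "ssh edge-router-7 'bind vlan 210 to port ge-0/0/3'",
            "ssh firewall-2 'permit 10.2.10.0/24'"
        );
        for (String step : steps) {
            System.out.println("executing: " + step);
            // A real script would shell out here; if the router were drawn
            // from a virtualized pool instead, these steps would be wrong.
        }
    }
}
```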
In the carrier world, there’s been some recognition that the “right” approach to service automation is to make it model-based, but even here approaches vary. Some like the idea of using a modeling language to describe a network topology, for example, and then having software decode that model and deploy it. While this may appear attractive on the surface, it has led to problems of its own, because it’s difficult to know what to model. For example, if you want to describe an application deployment based on a set of software components that exchange information, do you model the exchanges, or do you model the network the components expect to run on? Model the former and you may not have the information you need to deploy; model the latter and you may not recognize the dependencies and flows for which SLAs have to be provided.
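Here’s an illustrative sketch of those two modeling choices. Neither class mirrors any real modeling language, and the component and subnet names are invented; the point is simply that each view omits information the other captures.

```java
// Two hypothetical "views" of the same application deployment.
import java.util.List;

public class ModelingChoices {
    // Choice 1: model the exchanges (who talks to whom, with what SLA).
    // This view says nothing about where anything is hosted.
    record Exchange(String from, String to, int maxLatencyMs) {}

    // Choice 2: model the network the components expect (attachment points).
    // This view says nothing about flows or the SLAs they need.
    record Attachment(String component, String subnet) {}

    public static void main(String[] args) {
        List<Exchange> flowView = List.of(
            new Exchange("order-service", "inventory-service", 20),
            new Exchange("inventory-service", "warehouse-db", 5));
        List<Attachment> topologyView = List.of(
            new Attachment("order-service", "app-subnet"),
            new Attachment("warehouse-db", "data-subnet"));
        System.out.println("flow view: " + flowView);
        System.out.println("topology view: " + topologyView);
    }
}
```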
Another issue with model-based approaches is that there’s data associated with IT and network elements, parameters and so forth, and for service providers there are also the ever-important operations processes, OSS/BSS. You need databases, you need processes, and you need automated deployment that works for everything. How? I’ve noted in previous blogs that I believe the TM Forum, years ago, hit on the critical insight in this space with what it called the “NGOSS Contract”, which says that the processes associated with service lifecycle management are linked to events through a data model, the contract that created the service. For those who are TMF members, you can find this in GB942.
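A minimal sketch of that idea as I read it, with invented names (this is not the TMF’s specification): the contract is a data model that carries an explicit map from events to lifecycle processes, and the software consults the contract, not its own logic, to decide what to run.

```java
// Hypothetical contract-steered event handling in the spirit of GB942.
import java.util.Map;
import java.util.function.Consumer;

public class ContractSteering {
    // The "contract": service data plus explicit event-to-process bindings.
    record Contract(String serviceId, Map<String, Consumer<String>> handlers) {
        void onEvent(String event) {
            handlers.getOrDefault(event,
                    e -> System.out.println(serviceId + ": unbound event " + e))
                    .accept(event);
        }
    }

    public static void main(String[] args) {
        Contract vpn = new Contract("vpn-001", Map.of(
            "LINK_DOWN",  e -> System.out.println("run reroute process"),
            "SLA_BREACH", e -> System.out.println("run escalation process"),
            "ORDER",      e -> System.out.println("run deployment process")));
        vpn.onEvent("LINK_DOWN");   // the contract, not the code, decides
        vpn.onEvent("SLA_BREACH");
    }
}
```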
The problem is that GB942 hasn’t been implemented much, if at all, and one reason might be that hardly anyone can understand TMF documents. It’s also not directly applicable to all the issues of service automation, so what I want to do here is generalize GB942 into a conceptual model that could then be used to visualize the automation of lifecycle processes.
The essence of GB942 is that a contract defines a service in a commercial sense, so it wouldn’t be an enormous leap of faith to say that it could also define the service in a structural sense. If the resources needed to implement a service were recorded in the contract, along with their relationships, the result would be something that could indeed steer events to the proper processes. What this creates could be seen as a kind of dumbed-down version of Artificial Intelligence, which I propose to call structured intelligence. We’re not making something that can learn like a human, but rather something that represents the result of human insight. In my SI concept, a data model or structure defines the event-to-process correlations explicitly, and it’s this explicitness that links orchestration, management, and modeling.
Structured intelligence is based on the domain notion I blogged about earlier: a collection of elements that cooperate to do something creates a domain, something that has established interfaces and properties and can be viewed from the outside in those terms alone. SI says that you build up services, applications, and experiences by creating hierarchies of these domains, represented as “objects”. That creates a model, and when you decide to create what you’ve modeled, you orchestrate the process by decomposing the hierarchy you’ve built. When I did my first service-layer open-source project (ExperiaSphere) five or six years ago, I called these objects “Experiams”.
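Here’s one way the hierarchy-and-decomposition idea could look in code. The class names and the example service are mine, not ExperiaSphere’s; the point is that every domain exposes the same outside-in interface, and orchestration is just walking the hierarchy from the top.

```java
// Hypothetical domain hierarchy: composites decompose, leaves commit resources.
import java.util.List;

public class DomainHierarchy {
    interface Domain {               // what the outside world sees
        String name();
        void deploy();               // decompose/commit this domain
    }

    // A branch point: a cooperative collection of lower-level domains.
    record Composite(String name, List<Domain> children) implements Domain {
        public void deploy() {
            System.out.println("decomposing " + name);
            children.forEach(Domain::deploy);
        }
    }

    // A leaf: something that commits an actual resource.
    record Leaf(String name) implements Domain {
        public void deploy() {
            System.out.println("committing resource " + name);
        }
    }

    public static void main(String[] args) {
        Domain service = new Composite("business-vpn", List.of(
            new Composite("access", List.of(new Leaf("edge-router"), new Leaf("cpe"))),
            new Composite("core",   List.of(new Leaf("mpls-path")))));
        service.deploy();            // orchestration = walking the hierarchy
    }
}
```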
At the bottom of the structure are the objects that represent actual resources, either as atomic elements (switches, routers, whatever) or as control APIs through which you can commit systems of elements, like EMS interfaces (ExperiaSphere called these “ControlTalker” Experiams). From these building blocks you can structure larger cooperative collections until you’ve defined a complete service or experience. ExperiaSphere showed me that it was relatively easy to build SI based on static models created in software. I built Experiams using Java, and I called the Java application that created a service/experience a “Service Factory”: when it was instantiated it created a template, and if you filled in that template and sent it back to the Factory, the Factory built the service/experience and filled in all the XML parameters needed to manage the lifecycle of the thing it had built.
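A rough sketch of that factory pattern, with invented details (it is not the actual ExperiaSphere code): the factory takes a filled-in template, drives the leaf control objects, and hands back the parameters needed to manage what it built.

```java
// Hypothetical Service Factory: template in, lifecycle parameters out.
import java.util.LinkedHashMap;
import java.util.Map;

public class ServiceFactorySketch {
    // Leaf object wrapping a control API (an EMS interface, say).
    record ControlTalker(String api) {
        Map.Entry<String, String> commit(String order) {
            // A real implementation would call the management interface here.
            return Map.entry(api, "committed for " + order);
        }
    }

    static Map<String, String> buildService(Map<String, String> template) {
        Map<String, String> lifecycleParams = new LinkedHashMap<>(template);
        for (String api : new String[] {"ems-access", "ems-core"}) {
            var result = new ControlTalker(api).commit(template.get("order-id"));
            lifecycleParams.put(result.getKey(), result.getValue());
        }
        return lifecycleParams;   // everything needed to manage the lifecycle
    }

    public static void main(String[] args) {
        Map<String, String> template = Map.of("order-id", "cust-42", "class", "gold");
        System.out.println(buildService(template));
    }
}
```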
Static models like this aren’t all that bad, according to operators. Most commercially offered services are in fact “designed” and “assembled” in advance, to be stamped out on order when customers want them. However, the ExperiaSphere model of SI is software-driven, and so less flexible than a data-driven model would be. In either case there’s a common truth: the data/process relationship is explicitly created by orchestration, and that relationship then steers events for lifecycle management.
I think that management, orchestration, DevOps, and even workflow systems are likely to move to the SI model over time, because that model makes process/event/data relationships easy to represent by defining them explicitly and hierarchically. Every cooperative system (every branch point in the hierarchy) can define its own interfaces and properties to those above, deriving them from what’s below. There are a lot of ways of doing this, and we don’t have enough experience to judge which is best overall, but I think some implementation of this approach is where we need to go, and thus likely where a competitive market will take us.
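As one illustration of “deriving from what’s below” (and, as I said, there are many possible ways), a branch point could expose a status property that is simply the worst status reported by its children. This is a sketch under that assumption, nothing more.

```java
// Hypothetical derivation of a branch point's property from its children.
import java.util.Comparator;
import java.util.List;

public class DerivedProperties {
    enum Status { OK, DEGRADED, FAILED }

    interface Node { Status status(); }

    record Resource(Status status) implements Node {}

    record Branch(List<Node> children) implements Node {
        public Status status() {
            // The branch's externally visible status is the worst of those below.
            return children.stream().map(Node::status)
                           .max(Comparator.naturalOrder()).orElse(Status.OK);
        }
    }

    public static void main(String[] args) {
        Node service = new Branch(List.of(
            new Resource(Status.OK),
            new Branch(List.of(new Resource(Status.DEGRADED)))));
        System.out.println("service status seen from above: " + service.status());
    }
}
```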