Evolving Principles for Service and Application Lifecycle Modeling and Automation

Applications aren’t possible without application development, and in today’s hosted-feature age, neither are advanced services. That makes the question of how to implement edge and telecom applications critical, but it’s a difficult question to answer. Applications will typically have an optimum architectural model, set by the way the application relates to the real world. That model can normally be codified using a combination of middleware tools, which together create a programming/development model. Obviously, there are many possible options once you’ve set that optimum architectural model, and I want to open an exploration of what some of those options look like, or how they might develop.

I have proposed that edge computing, IoT, telecom, and many other modern tech initiatives are all about processing events, and that the event-centricity of these initiatives differs significantly from the transaction-centricity of traditional application development. Most recently, I suggested that the best approach to implementing an event-centric application was to view it as a hierarchical state machine or HSM. That could be the “optimum architectural model”, but what about the rest?

Let’s start by saying that network (and most IT) services are a composition, a cooperative grouping of “sub-services” or “service elements”. In most cases, these service elements are differentiated/delineated by administrative and/or control boundaries. That means there is generally a control point associated with a collection of technology, and managing that collection is done through the associated control point. This is the structure that the TMF’s SID data model envisions, where a “product” (the TMF’s term for what most of us would call a service) is divided into Customer-Facing Services (CFSs) and Resource-Facing Services (RFSs).
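
To make that composition idea concrete, here’s a minimal Java sketch of a service as a tree of elements, each tied to a control point. The types and names are purely illustrative, mine rather than anything taken from the SID itself.

```java
import java.util.List;

// Purely illustrative types, not the actual TMF SID classes: a service is modeled as a
// tree of sub-services ("service elements"), each managed through an associated control point.
record ControlPoint(String address) {}

record ServiceElement(String name, ControlPoint controlPoint, List<ServiceElement> subordinates) {}

class CompositionSketch {
    public static void main(String[] args) {
        // A customer-facing VPN element composed of resource-facing core and access elements.
        ServiceElement core = new ServiceElement("core-vpn", new ControlPoint("core-controller"), List.of());
        ServiceElement access = new ServiceElement("access-east", new ControlPoint("east-ems"), List.of());
        ServiceElement vpn = new ServiceElement("vpn-service", new ControlPoint("service-orchestrator"), List.of(core, access));
        System.out.println(vpn);
    }
}
```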

Data models don’t implement anything, though, and the TMF, in the mid-2000s, launched another project, the Service Delivery Framework or SDF. I was involved in that project as an (at the time) TMF member, and it was there that I first encountered the NGOSS Contract work of John Reilly that I often mention. However, SDF was also an architecture, not an implementation, and at one point in the project a European Tier One, representing a group of five, contacted me. They were concerned about how SDF was developing, not as a concept but as a model for software.

“I want you to understand that we stand 100% behind implementable standards,” the Tier One told me, “but we’re not sure this is one of them.” I launched the first phase of ExperiaSphere in 2007 to prove that an implementation could be done, and I made a couple of presentations of the result to the group of five Tier Ones and to the TMF. I never said (and still wouldn’t say) that ExperiaSphere was the only way to do composable services with lifecycle automation, but it proved that there was at least one way, and validated an expansion to the NGOSS Contract approach, which was my goal.

What I envisioned in my ExperiaSphere model was a distributable service model that included a “service element” for each composable piece of the service, meaning those sub-services. These service elements would be what today we call intent models, and each would have a state/event table that associated events with processes. My presumption was that the service model would be centrally administered, meaning a master copy would be kept in some convenient place, but I later hoped to also support a distributed model, where each element’s model component was kept in a place optimized for the location of the service components that made it up. The distributed approach would require that each element “know” the location of the superior and subordinate elements associated with it, since those elements can exchange events.
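
As a rough sketch of what one of those model entries might carry, consider something like the following. The states, events, and field names are my own illustration, not the original ExperiaSphere structures.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of one entry in a distributable service model; the names, states,
// and fields are illustrative, not the original ExperiaSphere structures.
class ServiceModelSketch {
    enum State { ORDERED, ACTIVE, FAULT }
    enum Event { ACTIVATE, RESOURCE_FAULT, RESTORED }

    // Each intent-modeled element carries its own state/event table (mapping an event, in a
    // given state, to the process to run) plus the locations of its superior and subordinates.
    record ModelElement(String name,
                        String superiorLocation,
                        List<String> subordinateLocations,
                        Map<State, Map<Event, String>> stateEventTable) {}

    public static void main(String[] args) {
        ModelElement access = new ModelElement(
                "access-east",
                "central-model-host",
                List.of(),
                Map.of(State.ACTIVE, Map.of(Event.RESOURCE_FAULT, "RunFaultIsolation")));
        // The event plus the current state identifies the process to invoke.
        System.out.println(access.stateEventTable().get(State.ACTIVE).get(Event.RESOURCE_FAULT));
    }
}
```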

Generally speaking, the control points associated with each service element would be connected to the resources committed to each of their services, meaning that they would likely receive the events generated by conditions/changes in those resources. If we envisioned a service made up of a single element, then, in ExperiaSphere, that element would process its own internal events and there would be only one kind of event exchanged: the state changes in that service element, communicated up to the top service-model level.

What happens inside one of the service elements, then, is opaque except for the state-change events it generates upward to its superior element and the commands, issued downward in event form, to its subordinate service elements. ExperiaSphere doesn’t dictate that its state/event approach be followed inside the intent-modeled service elements, only that it be followed for events exchanged among the service element processes. However, the state/event model could be applied within a service element to handle locally generated events.
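
A minimal sketch of that opacity, with invented event names and a stand-in link to the superior element, might look like this: commands arrive from above as events, and only the resulting state change is visible going back up.

```java
// Hypothetical sketch: the only externally visible behavior of an intent-modeled element is
// the state changes it reports upward and the commands it accepts from above; everything
// else (resource control, local event handling) stays opaque. Names are invented.
enum UpwardEvent { ACTIVE, DEGRADED, FAILED }
enum DownwardCommand { ACTIVATE, MODIFY, TEARDOWN }

interface SuperiorLink {
    void reportStateChange(String element, UpwardEvent event);
}

class OpaqueServiceElement {
    private final String name;
    private final SuperiorLink superior;

    OpaqueServiceElement(String name, SuperiorLink superior) {
        this.name = name;
        this.superior = superior;
    }

    // Commands arrive from the superior element in event form.
    void onCommand(DownwardCommand command) {
        // Internal handling is invisible from outside; only the resulting state change
        // is reported upward to the superior element.
        superior.reportStateChange(name, UpwardEvent.ACTIVE);
    }

    public static void main(String[] args) {
        OpaqueServiceElement access = new OpaqueServiceElement("access-east",
                (element, event) -> System.out.println(element + " reports " + event));
        access.onCommand(DownwardCommand.ACTIVATE);
    }
}
```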

When an event occurs, it would be processed by the specific element responsible for it, in whatever centralized or distributed location that element was assigned. The event and current state would identify the process to be invoked, and that process could then be run in a place that’s optimized for what it’s going to do. Thus, even a centralized service model process could invoke a process that’s distributed to the point where it acts on local (or optimally localized) resources. Given that, it would also make sense to distribute the processing of service elements, not just the execution of the state/event-defined processes. A service data model would then be dissected when instantiated, with the service element pieces distributed to process points proximate to the control points for the resources being committed. If a service had a core VPN element and five access-network elements, its data model would have six service elements, and each would be dispatched to a host close to where the resources it represented could be controlled.
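
Here’s a hedged sketch of that dissection-and-dispatch step for the VPN example. The mapping of control-point regions to nearby process points is invented, standing in for whatever real placement logic an implementation would use.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of "dissecting" a service model at instantiation: each element of
// the model is dispatched to a process point near its control point. The region-to-host
// mapping is invented, a stand-in for real placement logic.
record ElementModel(String name, String controlPointRegion) {}

class ModelDispatcher {
    private static final Map<String, String> PROCESS_POINTS = Map.of(
            "core", "core-datacenter",
            "east", "east-edge-host",
            "west", "west-edge-host");

    public static void main(String[] args) {
        // The VPN example: one core element and five access-network elements.
        List<ElementModel> vpnModel = List.of(
                new ElementModel("core-vpn", "core"),
                new ElementModel("access-1", "east"),
                new ElementModel("access-2", "east"),
                new ElementModel("access-3", "west"),
                new ElementModel("access-4", "west"),
                new ElementModel("access-5", "west"));

        for (ElementModel element : vpnModel) {
            String host = PROCESS_POINTS.getOrDefault(element.controlPointRegion(), "central-host");
            System.out.println("dispatching model for " + element.name() + " to " + host);
        }
    }
}
```

The point isn’t the placement algorithm, which could be anything from a static table to a full optimization; it’s that each piece of the model ends up handled near the control point it will drive.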

When running in these distributed locations, the service element process would generate and field events to and from both its superior element (the one above it in the hierarchy, which in my VPN example would be the service itself) and its subordinate elements. These events would be handled by the state/event tables for each element, and so each element is a finite-state machine (FSM). However, the service overall is a hierarchical state machine (HSM), because events couple the service elements.
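
A compact illustration of that FSM-plus-coupling idea, again with invented states and events, might look like the following. The upward link (a Consumer here) is what chains the per-element FSMs into a hierarchical state machine.

```java
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch: each element is a finite-state machine driven by a state/event table,
// and elements are coupled into a hierarchy by the events they send upward.
class ElementFsm {
    enum State { ORDERED, ACTIVATING, ACTIVE }
    enum Event { ACTIVATE, ACTIVATED }

    private State state = State.ORDERED;
    private final String name;
    private final Consumer<Event> superior;   // upward coupling that makes the whole service an HSM

    // (state, event) -> next state; a real table would also name the process to run.
    private final Map<State, Map<Event, State>> table = Map.of(
            State.ORDERED, Map.of(Event.ACTIVATE, State.ACTIVATING),
            State.ACTIVATING, Map.of(Event.ACTIVATED, State.ACTIVE));

    ElementFsm(String name, Consumer<Event> superior) {
        this.name = name;
        this.superior = superior;
    }

    void onEvent(Event event) {
        State next = table.getOrDefault(state, Map.of()).get(event);
        if (next == null) return;                // event not meaningful in this state
        state = next;
        System.out.println(name + " -> " + state);
        if (state == State.ACTIVE) superior.accept(Event.ACTIVATED);  // report the state change upward
    }

    public static void main(String[] args) {
        ElementFsm service = new ElementFsm("vpn-service", e -> {});
        ElementFsm access = new ElementFsm("access-east", service::onEvent);
        service.onEvent(Event.ACTIVATE);   // the service starts activating...
        access.onEvent(Event.ACTIVATE);
        access.onEvent(Event.ACTIVATED);   // ...and its subordinate's state change drives it ACTIVE
    }
}
```

Run it and the access element’s activation drives the service element to ACTIVE, which is the hierarchical coupling in miniature.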

In the original ExperiaSphere project, where the concept was actually implemented in Java, I implemented a specific class of service element, called a “ControlTalker”, that represented the actual control point interface and was responsible for commanding actions and receiving events. ControlTalkers were designed to be distributed to the area of the control points, and to be linked to services using “late binding”, so the service data model wasn’t bound to a specific resource set. The ControlTalker had a specialized state/event table, necessitated by the fact that the events and states would vary depending on the control point interface. All other service elements were represented by “Experiam” objects, and these all had a common event structure (a Java enumerated type) that ControlTalkers also had to support in addition to their specialized events. For Java programmers, the ControlTalker class extended the Experiam class.
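
The class relationship I’ve just described could be sketched roughly like this; it’s not the original ExperiaSphere Java, just an illustration of the common-event base class and the specialized subclass that extends it.

```java
// Hypothetical sketch of the class relationship described above, not the original ExperiaSphere code.
class ExperiamSketch {
    // The common event structure shared by all service elements (an enumerated type in the original).
    enum ServiceEvent { ACTIVATE, ACTIVATED, FAULT, RESTORED, TEARDOWN }

    // Base class for all service-element objects: handles the common service-level events.
    static class Experiam {
        void onServiceEvent(ServiceEvent event) {
            System.out.println("Experiam handling " + event);
        }
    }

    // ControlTalker extends Experiam: it must still accept the common events, but it also
    // fields control-point-specific events whose states and types depend on the interface.
    static class ControlTalker extends Experiam {
        void onControlPointEvent(String rawEvent) {
            // Specialized state/event handling for the device or controller it talks to.
            System.out.println("ControlTalker translating control-point event: " + rawEvent);
        }
    }

    public static void main(String[] args) {
        ControlTalker talker = new ControlTalker();
        talker.onServiceEvent(ServiceEvent.ACTIVATE);     // common event structure, inherited
        talker.onControlPointEvent("linkDown ge-0/0/1");  // interface-specific event
    }
}
```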

Model-handling in this project was the responsibility of a “Service Factory” (which, of course, was another class). A Service Factory was capable of supporting a set of services, and any Service Factory instance could process events for any service type it supported. In this implementation, though, there was no attempt to break up the service data model and dispatch pieces of it to different factories; a central factory controlled all the service actions for a given service. What I was working toward with ExperiaSphere would add model-handling distributability, so that any given service element, and any number of its subordinate elements, could be separated out and handled local to the control point.
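
A sketch of the Service Factory role, with illustrative types and names of my own, might look like this:

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the Service Factory role: any factory instance that supports a
// given service type can field events against that service's model. Details are illustrative.
class ServiceFactorySketch {
    record ServiceModel(String orderId, String serviceType) {}

    static class ServiceFactory {
        private final Set<String> supportedTypes;
        ServiceFactory(Set<String> supportedTypes) { this.supportedTypes = supportedTypes; }

        boolean canHandle(ServiceModel model) {
            return supportedTypes.contains(model.serviceType());
        }

        void processEvent(ServiceModel model, String event) {
            // In the original implementation, a single (central) factory handled all the
            // service actions for a given service; distributing pieces of the model came later.
            System.out.println("Factory handling " + event + " for order " + model.orderId());
        }
    }

    public static void main(String[] args) {
        ServiceFactory factory = new ServiceFactory(Set.of("vpn", "sd-wan"));
        ServiceModel order = new ServiceModel("order-1001", "vpn");
        if (factory.canHandle(order)) factory.processEvent(order, "ACTIVATE");
    }
}
```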

It was clear from the testing of my Java implementation that the ControlTalkers were the most implementation-critical elements of the whole application. State synchronization and command/response exchanges among the service elements, representing as they did service commands or conditions, were relatively rare and could probably, in many cases, be handled even as serverless cloud functions. ControlTalkers, by contrast, had to deal with real-time events from real-world devices, and their performance had to keep up with that real world. However, it seems likely that a platform capable of implementing ControlTalkers would also be fine for implementing the other Experiams, which operated on service-level events.

Obviously it was possible to write Java code for this without relying on any specific tools, but that’s unnecessarily complicated when tools to facilitate things are available. In the second phase of ExperiaSphere, I elevated the approach above any specific implementation, and looked at using the OASIS TOSCA model as the service model. TOSCA now has some features that would support an implementation, but it doesn’t have the explicit state/event facilities I wanted. A better approach would be to find something that could implement the distributed service-element model-handling (the state/event processing), and something else (the same tool, a related one, or a different one) to author and host the processes identified in the state/event tables.

That’s what I’m exploring now, and I’d like the approach to be suitable for generalized edge applications too. I’d like to round out the concept of event-driven applications by identifying specific, optimized implementation tools, making lifecycle operations automation, 5G, and so forth applications of edge computing. When I’ve lined up an approach that conforms (or can conform) to my ExperiaSphere framework and looks both sound and open, I’ll do a blog on it.