Universal Modeling: The Service Side

I’ve long been of the view that the best way to manage a network service through its entire lifecycle is to model the service and base the lifecycle software on state/event behaviors defined within the model. I’ve also dabbled with the use of models in defining social/metaverse and IoT applications. Through all of this, I’ve been increasingly interested in whether we could generalize the modeling and the associated software, which would simplify the creation of all these applications, make them more open, and help define a “platform” of features that might then underpin edge computing.

For network services and applications, I’ve long favored an approach that models the service/application as a hierarchical set of intent models that first represent features and eventually deployments. This model structure, being a “model of intent models” that looks like an upside-down tree or organizational chart, would not attempt to represent the deployment as a set of real-world resources. Instead, the details would be contained within the “black box” of the lowest-level (deployment-level) intent models on each branch.
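
To make that structure a bit more concrete, here’s a minimal sketch in Python of what a node in such a “model of intent models” might look like. The class and field names are my own invention for illustration, not any standard schema; the point is simply that interior nodes describe features in terms of their children, and only the leaves carry a deployment binding.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class IntentModel:
    """One node in the 'model of intent models' tree.

    Interior nodes describe a feature in terms of their children;
    only leaf nodes carry a deployment binding (the 'black box').
    """
    name: str
    sla: dict                    # parameters this node promises to meet
    interfaces: list             # access points this node exposes
    children: list = field(default_factory=list)   # subordinate intent models
    deployment_binding: Optional[str] = None        # set only on leaves

    def is_leaf(self) -> bool:
        return not self.children

# A trivial VPN feature decomposed into deployment-level leaves.
vpn = IntentModel(
    name="vpn-feature",
    sla={"availability": "99.9%"},
    interfaces=["site-A", "site-B"],
    children=[
        IntentModel("vpn-core", {"latency-ms": 30}, ["core"],
                    deployment_binding="provisioned-ip-core"),
        IntentModel("vpn-access-A", {"latency-ms": 10}, ["site-A"],
                    deployment_binding="sd-wan-edge"),
        IntentModel("vpn-access-B", {"latency-ms": 10}, ["site-B"],
                    deployment_binding="sd-wan-edge"),
    ],
)
```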

For metaverse applications, and for IoT, I’ve speculated that the best approach would be to create a “digital twin” model, meaning that the model would create a virtual-world framework, some or all elements of which were synchronized with real-world behaviors. This, on the surface, seems completely at odds with the hierarchical, feature-centric model of services and applications.

Is there any chance of, or value to, harmony here? To figure that out, we need to look at why those two model options exist, and why those different classes of applications/services seem better served by different approaches. I’ll start in this blog with service/application modeling, which I’ll address using service examples. In the next blog, I’ll look at the digital-twin approach that seems best for IoT and metaverse, and finally I’ll try to find some common ground. OK? Then let’s get to it.

If you look at a service or application from the top, you would see it as a collection of features, presented through an “access interface” that might be a physical interface (Ethernet, for example) or an API. You could also view this as the “commercial” view, since arguably the price of a service could be based on the sum of the prices of the “features” that make it up. That, as it happens, is somewhat consistent with the TMF’s approach in their SID model, which has a “customer-facing service” or CFS component to what they call a “product”, which is what most of us would call the retail service.
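
Just to illustrate that commercial framing (this is a rough sketch of the idea, not the SID schema itself, and all the names are hypothetical), the “product” could be nothing more than a bundle of priced customer-facing features:

```python
from dataclasses import dataclass

@dataclass
class CustomerFacingFeature:
    """Roughly analogous in spirit to a SID customer-facing service (CFS): a priced
    feature presented at an access interface, with no visible implementation detail."""
    name: str
    access_interface: str    # e.g. an Ethernet port or an API endpoint
    monthly_price: float

@dataclass
class Product:
    """The retail service: from the buyer's perspective, just a bundle of features."""
    name: str
    features: list

    def monthly_price(self) -> float:
        return sum(f.monthly_price for f in self.features)

retail_vpn = Product("managed-vpn", [
    CustomerFacingFeature("vpn", "ethernet-port", 400.0),
    CustomerFacingFeature("sase", "ethernet-port", 150.0),
])
print(retail_vpn.monthly_price())   # 550.0
```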

A feature-composite view, of course, isn’t going to actually get the service delivered. For that, you have to commit each feature to infrastructure capable of supporting it, consistent with the policies that the operator has established and perhaps guaranteed to the customer via an SLA. If we’re going top-down, then the next step is to complete that mapping to infrastructure, and there are multiple ways in which that could be done.

The “classic” approach would be to assume that the features were defined by a high-level intent model, and that the mapping process consisted of the decomposition of that model into subordinate elements. That decomposition would continue downward until the interior of the model element under inspection wasn’t another set of lower-level models but an actual infrastructure binding/mapping. This would result in a per-feature implementation, and it seems to work pretty well for many of the services and applications we’d naturally think of.
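
Continuing the sketch above (and still assuming my hypothetical IntentModel structure), the decomposition is essentially a recursion that bottoms out when it hits a deployment binding rather than more models:

```python
def decompose(model: IntentModel, commitments: list) -> None:
    """Walk the intent-model tree top-down, collecting the deployment-level
    (leaf) bindings that must actually be committed to infrastructure."""
    if model.is_leaf():
        # The interior of this element is a real binding, not more models.
        commitments.append((model.name, model.deployment_binding, model.sla))
        return
    for child in model.children:
        decompose(child, commitments)

commitments = []
decompose(vpn, commitments)
# -> [('vpn-core', 'provisioned-ip-core', ...), ('vpn-access-A', 'sd-wan-edge', ...), ...]
```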

In this approach, the deployments defined by the leaf elements are responsible for delivering what their parameters describe, which is a given SLA at a given set of interfaces. If something breaks, they’re expected to remedy that something or to report a fault to the “branch” that’s their parent. That branch would then have to recompose its subordinate elements to fit its own SLA and interfaces, and if it could not, it would kick things upstairs again.
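
The lifecycle side of that might look roughly like the following sketch, again building on the hypothetical IntentModel above: a branch either absorbs a subordinate’s fault by recomposing, or escalates it to its own parent.

```python
from typing import Optional

def try_recompose(branch: IntentModel, faulty_child: IntentModel) -> bool:
    """Placeholder: attempt to re-map subordinate elements (for example, redeploy
    the faulty leaf on different infrastructure) so the branch still meets its SLA."""
    return False   # pessimistic stub, purely for illustration

def handle_fault(branch: IntentModel, faulty_child: IntentModel,
                 parent_of: dict) -> Optional[IntentModel]:
    """Walk a fault up the tree: each branch tries to absorb it by recomposing
    its subordinates, and escalates to its own parent only if it cannot."""
    if try_recompose(branch, faulty_child):
        return branch                       # fault absorbed at this level
    parent = parent_of.get(branch.name)     # assumed name-to-parent lookup
    if parent is None:
        return None                         # reached the top: a service-level SLA problem
    return handle_fault(parent, branch, parent_of)   # kick it upstairs
```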

Let me offer an example to make what I think is a complicated and subtle point clearer. Let’s suppose we have a service that consists of a SASE feature and a VPN feature. We would model it from the top as a single VPN instance and a SASE instance for each network service access point. So far, this is the same way the classic model might see it.

Now, though, we come to the decomposition of those two elements to support an order, which for simplicity’s sake we’ll say has three sites. We can assume that there are two options for the VPN: a “coercive” provisioning of the VPN through the IP network management interface, and an SD-WAN VPN. We can also assume that there are three possible SASE models: one that uses the cloud to host a SASE feature instance near the user, one that deploys SASE onto a uCPE device already in place or shipped, and a third that deploys a custom SASE device to the user premises.
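
If it helps, here’s a hedged sketch (with made-up names, not drawn from any standard) of the option catalogs the decomposition step would be choosing from:

```python
# Hypothetical option catalogs for the decomposition logic; the choice among them
# would be driven by the order, the access technology at each site, and the SLA.
VPN_OPTIONS = {
    "provisioned-vpn": {"how": "coercive provisioning via the IP network management interface"},
    "sd-wan-vpn":      {"how": "SD-WAN overlay built from edge devices"},
}
SASE_OPTIONS = {
    "cloud-hosted-sase":  {"where": "cloud instance hosted near the user"},
    "ucpe-hosted-sase":   {"where": "uCPE device already in place or shipped to the site"},
    "custom-sase-device": {"where": "purpose-built SASE appliance on the premises"},
}
```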

The first question would be whether the VPN should be the provisioned (MPLS) option or SD-WAN, and that determination might be made explicitly in the order, based on the current access technology available at the sites, or based on SLA parameters in the order. If an SD-WAN option is available, then the policy of the provider might be to deploy SD-WAN as a feature using the same device as would be used for SASE. In that case, the decomposition of the VPN feature would generate a model that calls for uCPE on the premises, and shipment of such a device to any sites that didn’t have it already. From that point, it would call for the uCPE element to be used to host both SASE and SD-WAN. These decisions would be expressed by the creation of a deployment model that would have three hardware elements (the uCPE), and seven feature deployments (SD-WAN and SASE in three sites plus the overall VPN feature, which would represent the SD-WAN “network”).
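
To make the arithmetic explicit, here’s an illustrative sketch of the deployment model that decomposition might generate for that three-site order. The structure and names are hypothetical, but the counts match what I just described: three hardware elements and seven feature deployments.

```python
SITES = ["site-1", "site-2", "site-3"]

# Decomposition picked the SD-WAN VPN and the uCPE-hosted SASE option, so:
# three hardware elements, one uCPE per site (shipped if not already in place)...
hardware_elements = [{"type": "ucpe", "site": s} for s in SITES]

# ...and seven feature deployments: SD-WAN and SASE hosted on the uCPE at each
# site, plus the overall VPN feature representing the SD-WAN "network" itself.
feature_deployments = (
    [{"feature": "sd-wan-edge", "host": f"ucpe@{s}"} for s in SITES]
    + [{"feature": "sase", "host": f"ucpe@{s}"} for s in SITES]
    + [{"feature": "vpn-overlay", "host": "sd-wan-network"}]
)

assert len(hardware_elements) == 3 and len(feature_deployments) == 7
```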

The point here is that we compose a service from features, which eventually decompose into abstract models that define how a feature is deployed and managed. The hierarchical model doesn’t need to describe that, only be able to commit it onto resources. Every “leaf” in such a model is a deployment, and every deployment is totally virtual—represented only by its outcome and not its details.

In a conversation I had with the late (and great) John Reilly on the TMF SID approach, John suggested that this approach could be managed with the SID model. I’ve also been told by TOSCA expert Chris Lauwers from Ubicity that the TOSCA model could be made to work this way too. My only concern is that while either approach might be viable, neither might be optimal without some specific changes to those strategies. I modeled services using XML in ExperiaSphere because it was easier to mold to the specific needs I had, but using XML means writing the model-management software needed to parse and act on the XML.
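
For what it’s worth, here’s a toy example of what “writing model management to parse the XML” implies. The XML layout is purely hypothetical, invented for this illustration; it is not the actual ExperiaSphere schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical service-model XML, purely for illustration.
SERVICE_XML = """
<service name="managed-vpn">
  <feature name="vpn" sla-availability="99.9">
    <deployment binding="sd-wan-overlay"/>
  </feature>
  <feature name="sase" sla-availability="99.9">
    <deployment binding="ucpe-hosted"/>
  </feature>
</service>
"""

root = ET.fromstring(SERVICE_XML)
for feature in root.findall("feature"):
    binding = feature.find("deployment").get("binding")
    print(feature.get("name"), "->", binding)
# vpn -> sd-wan-overlay
# sase -> ucpe-hosted
```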

I’d have liked to see the ONAP people take up the modeling issue; they should have started with that approach but didn’t, and I finally told them I’d not take release briefings from them until they implemented a model-driven approach. I never heard from them again, so I presume they haven’t.

This feature-hierarchy modeling and decomposition approach works for services and applications that can be considered to be built from “virtual” elements, black boxes whose contents are unknowable and unimportant. The second of the two modeling approaches I opened with, the digital-twin approach, really isn’t helpful for this mission because of that black-box nature of virtual deployment; to a virtual element, the real world is only a temporary mapping. For IoT and social-media metaverses, though, we have to know the specific mapping between the real world and the virtual, because the application, through the model, would control the virtual world from the real one, and perhaps vice versa.

Next week, I’ll explore what the digital-twin approach would mean, deriving its modeling as I’ve done here, and we’ll see if there are any elements of the model or its associated software that could be reused. Is there one model to rule them all? We’ll find out.