Is a Unified Model for Lifecycles and Real-Time Processes Possible?

In the last two blogs on modeling, I reviewed “service modeling” and “virtual-application modeling” and determined that the digital-twin approach wasn’t optimal there. I then reviewed the metaverse and IoT applications’ use of modeling, and determined that the hierarchy/intent approach wasn’t optimal for them. This would seem to argue that there are two missions and two models, and never the twain shall meet. But is that true? That’s what we’ll address in this blog, and while I said this would be a three-blog series, it’s going to need a fourth.

In the first blog of this series, I developed the view that the hierarchy/intent model, the one similar to the TMF’s SID, was well-suited for lifecycle management of service or application elements. It did not address the functional flow of the service or application, but rather the management of their components.

In the second blog, which focused on “digital twin” modeling, the view I expressed was that this model did essentially the opposite; it focused on the functional flow rather than on component management. That makes it well-suited for both social metaverse and IoT missions.

If we start with these assumptions, then the first question we’d have to ask is whether the “natural fit” model for one of the missions could be used effectively for the other. The second would be whether there was a mingling of missions that might justify a model with features of both digital-twin and hierarchy/intent. So, let’s ask.

Let’s start with the first question. I do not think that a hierarchy/intent model, which is focused on lifecycle management, could make a useful contribution to modeling the functional flow of social metaverse or IoT. Yes, it could manage the deployment of the elements, but since the lifecycle management of something that’s a functional flow has to look at the functions, I do not believe that the hierarchy/intent model is adequate. I think it’s clear that those applications mandate functional modeling, and that functional modeling mandates a digital-twin approach because the model has to drive things that link to the real world.

Could the digital-twin model be effective in lifecycle management? That turns out to be complicated. If we presumed that what we wanted was the kind of successive decomposition that the hierarchy/intent approach offers, what would be required would be an initial model of “features”, which we could visualize (using the example from the first blog) as a circle representing “VPN Service” and a series of connected circles representing “Access”. What would then be required would be the successive “decomposition” of these circles, as we did with the other model.
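To make that picture concrete, here’s a minimal sketch in Python of the circles-and-their-subordinates structure. The class and element names are mine, invented purely for illustration; they don’t come from the TMF SID or any other standard.

```python
# A minimal sketch (hypothetical names) of the successive decomposition:
# a "VPN Service" element holds subordinate "Access" elements, and any
# element can decompose further into its own subordinates.

from dataclasses import dataclass, field
from typing import List


@dataclass
class ModelElement:
    """One element in the hierarchy/intent service model."""
    name: str
    children: List["ModelElement"] = field(default_factory=list)

    def decompose(self, depth: int = 0) -> None:
        # Walk the hierarchy, printing each element at its nesting level.
        print("  " * depth + self.name)
        for child in self.children:
            child.decompose(depth + 1)


# The "VPN Service" circle and its connected "Access" circles.
vpn = ModelElement("VPN Service", children=[
    ModelElement("Access: Site A"),
    ModelElement("Access: Site B"),
    ModelElement("Access: Site C"),
])
vpn.decompose()
```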

Where the “complicated” arises is if we assume that what we have at any point represents something explicit in the real world. If we let this decomposition run rampant down to the device level, we’ve created a model that is not only totally brittle (any change in network routing would require that it be redone) but also non-representative, since the structure of a network should never be visible below the level where you’re applying control. If you don’t provision individual routers to set up a service (that would be done, to the extent it’s done at all, by the management system), you shouldn’t try to portray individual routers in the model. That means that, logically, the “digital twin” you’d be creating would be a twin of the relationship between the administrative elements and not of the devices.

It would also be true that there would be elements in the model, like the “VPN Service” element, that wouldn’t be a digital twin of anything. In a sense, these would be the equivalent of “locales” in my social-metaverse-and-IoT model. You’d decompose a “locale” as a container of lower-level things, and through successive decompositions along that route, you’d end up with an administrative element that would actually be a digital twin.
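A small extension of that sketch can capture the distinction: only elements bound to a real administrative element carry a “twin” reference, while things like “VPN Service” (or a metaverse “locale”) are pure containers. Again, every name here is hypothetical.

```python
# Sketch (hypothetical names): container elements twin nothing; only the
# administrative elements at the bottom of the decomposition are digital
# twins of something real.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class TwinElement:
    name: str
    twin_of: Optional[str] = None          # the real administrative element, if any
    children: List["TwinElement"] = field(default_factory=list)


service = TwinElement("VPN Service", children=[            # container only
    TwinElement("Access: Site A", twin_of="access-domain-east"),
    TwinElement("Access: Site B", twin_of="access-domain-west"),
])

for element in [service, *service.children]:
    print(element.name, "->", element.twin_of or "container, no twin")
```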

We’re not done with the complication, though. The digital twin model, recall, models functional flows in its primary mission. These digital twins don’t do that at all; they model only administrative elements, meaning “control flows”. This is a fundamental difference on the surface, but fundamentals are in the eye of the beholder here; we’re talking about what happens in the metaverse/IoT mission, and the question isn’t whether the missions are the same but whether the modeling strategy would work for both.

I think it would, but let’s see if we can prove it. The “VPN Service” object in the hierarchy/intent model, when it’s triggered by an event (“Service Order”, for example), would take the order parameters and select a decomposition that matched those requirements, based on a policy. That would cause the model for that decomposition option to be instantiated, and the objects in it would receive “Service Order” events. Thus, we can visualize this as a state/event-driven process, which I’ve always suggested we do. There is no reason why a VPN Service object in a digital-twin model couldn’t do the same thing, provided that it had state/event capability.
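Here’s a rough Python sketch of what that state/event handling could look like. The states, events, policy test, and parameters are all invented for illustration; the point is only that a (state, event) table can drive decomposition and cascade the event to the subordinates.

```python
# Sketch of state/event handling for a model object (all names hypothetical).
# A "Service Order" event arriving in the "Idle" state selects a decomposition
# option by policy, instantiates the subordinate objects, and forwards the
# same event to them.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple


@dataclass
class ServiceObject:
    name: str
    state: str = "Idle"
    decomposable: bool = True
    children: List["ServiceObject"] = field(default_factory=list)

    def handle(self, event: str, params: dict) -> None:
        handler = STATE_EVENT_TABLE.get((self.state, event))
        if handler:
            handler(self, params)


def on_service_order(obj: ServiceObject, params: dict) -> None:
    if obj.decomposable:
        # Policy: pick the decomposition option that matches the order parameters.
        option = "premium" if params.get("sla") == "gold" else "standard"
        obj.children = [
            ServiceObject(f"{obj.name}/{option}-access-{site}", decomposable=False)
            for site in params.get("sites", [])
        ]
        # Cascade the same event to the newly instantiated subordinates.
        for child in obj.children:
            child.handle("Service Order", params)
    obj.state = "Activating"


# The state/event table: each (state, event) intersection activates a process.
STATE_EVENT_TABLE: Dict[Tuple[str, str], Callable[[ServiceObject, dict], None]] = {
    ("Idle", "Service Order"): on_service_order,
}

vpn = ServiceObject("VPN Service")
vpn.handle("Service Order", {"sla": "gold", "sites": ["A", "B"]})
print(vpn.state, [child.name for child in vpn.children])
```

Swap in a different state/event table and process set, and the same object skeleton could serve a different mission, which is the point of the next question.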

But can we visualize metaverse or IoT models in state/event terms? If we can, then a lot of the logic associated with the two missions would be represented in a common model, using different state/event tables and activated process sets. If not, then the baseline software for a metaverse or IoT model would have to be different.

If we’re modeling the functionality of a virtual world, the key “event” is the passage of time. We can imagine that there exists a “metaverse clock” that’s ticking off intervals. When an avatar receives a “tick event”, it would apply whatever behavior policies were associated with that event. If a locale received a tick event, it would update the three-dimensional model of the space, the model from which every viewpoint would be generated. We could draw some inferences on how the avatar would manage behavior policies: active behaviors would each be “sub-objects”, just like a decomposition subordinate in the hierarchy/intent model, and they would receive tick events from the parent, which would allow them to change the avatar’s three-dimensional representation appropriately.
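Here’s a small sketch of that tick-driven flow, again with purely illustrative names: the locale forwards clock ticks to its avatars, each avatar runs its active behavior sub-objects, and the locale then rebuilds the shared model of the space.

```python
# Sketch of tick-driven behavior (hypothetical names). A "metaverse clock"
# sends tick events to a locale; the locale forwards them to its avatars,
# each avatar runs its active behavior sub-objects, and the locale then
# rebuilds the shared model of the space.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Behavior:
    name: str

    def on_tick(self, avatar: "Avatar") -> None:
        # An active behavior adjusts the avatar's 3D representation.
        avatar.pose = f"{avatar.pose} + {self.name}"


@dataclass
class Avatar:
    name: str
    pose: str = "idle"
    behaviors: List[Behavior] = field(default_factory=list)

    def on_tick(self) -> None:
        for behavior in self.behaviors:
            behavior.on_tick(self)


@dataclass
class Locale:
    avatars: List[Avatar] = field(default_factory=list)

    def on_tick(self) -> None:
        for avatar in self.avatars:
            avatar.on_tick()
        self.rebuild_space_model()

    def rebuild_space_model(self) -> None:
        # Recompute the 3D model of the space every viewpoint is drawn from.
        print("space model:", {a.name: a.pose for a in self.avatars})


room = Locale([Avatar("alice", behaviors=[Behavior("wave")]), Avatar("bob")])
for _ in range(2):        # two ticks of the metaverse clock
    room.on_tick()
```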

We could also optimize this approach a bit. Let’s assume that there’s nothing going on in a room. It doesn’t hurt to “tick” it, but if the room contains a bunch of avatars, the room doesn’t “know” whether those avatars have something pending or not. But suppose that the avatars in a locale “registered” with the room when they had an action pending. We could then “tick” the room and have the room pass the “tick” along to the registered avatars, which would allow them to trigger their behaviors, and then the room could remake its model of the space for visualization. The avatars would signal the room through events in order to register for a tick. I’m not trying to write an entire application here, just to illustrate that we can make both processes state/event-driven as long as we define the states, the events, and the processes at their intersections effectively.
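And here’s a sketch of the registration optimization, with the same caveat that the names are invented: only avatars that registered a pending action get the tick, and the room remakes its space model once, afterward.

```python
# Sketch of the registration optimization (hypothetical names): avatars that
# have an action pending register with the room, the room forwards tick
# events only to those avatars, and then remakes its space model once.


class Room:
    def __init__(self) -> None:
        self.pending = set()            # avatars registered for the next tick

    def register(self, avatar: "Avatar") -> None:
        # The avatar signals, via an event, that it wants the next tick.
        self.pending.add(avatar)

    def on_tick(self) -> None:
        active, self.pending = self.pending, set()
        for avatar in active:
            avatar.on_tick()
        if active:
            print(f"space model remade for {len(active)} active avatar(s)")


class Avatar:
    def __init__(self, name: str, room: Room) -> None:
        self.name = name
        self.room = room

    def queue_action(self) -> None:
        self.room.register(self)

    def on_tick(self) -> None:
        print(f"{self.name} runs its pending behavior")


room = Room()
alice, bob = Avatar("alice", room), Avatar("bob", room)
alice.queue_action()        # only alice has something pending
room.on_tick()              # bob is skipped; the space model is remade once
```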

OK, we’ve reached a conclusion, which is that a common software and model framework could be used to describe both a service management application and a metaverse/IoT application. You may, in the course of digging through all of this across three blogs, have lost track of why you care, so let’s end by making that point.

Service management, application management, social metaverse, and IoT are the primary credible drivers of edge computing. Right now, we tend to think of edge computing as differing from cloud computing only by where the hosting happens to be done. I submit that the majority of cloud applications today are really built on a “platform-as-a-service” framework of web services, designed to simplify development, manage resources effectively, and optimize the experience overall. If all credible edge applications could be built on a common model and baseline software architecture, that combination could become the PaaS of the edge. If we somehow came up with a standard for it, and required it for all edge hosting, we could create an edge computing framework that allowed free migration of application components. That’s a pretty worthy goal, and I think it could be achieved if we work at it.

That seems like a conclusion right there, so why blog number four? The answer is that the digital twin model might have a more specific role, with more specific benefits, in network management applications. The difference between service and network management is that the latter is the management of real devices. An operator would normally have to at least consider having both. So for my last blog in this series, I’ll explore whether a digital twin model would be a benefit for network management missions.