Edge Computing Part 4: Adapting Current Technologies to the Edge

What can we learn, if anything, from those applications already considered “edge computing”? We have cloud applications that are event-driven today, and some of them aren’t even IoT. We also have some applications that might not seem event-driven at all. Obviously, we have local edge computing in place for most of the “IoT” applications enterprises run today. If the popular notion of edge computing, which is “edge as a service,” is to be realized, we clearly have to go beyond what is already being done. That doesn’t mean that the evolving edge model should ignore the opportunity to support and improve current applications.

Today’s IoT applications fall into two categories: those that represent traditional-industry, fixed-facility deployments of IoT elements, and those that represent either mobile IoT elements or high-value, low-density objects. The former are almost always linked to local controllers and local edge hosting components that provide a very short control loop. These may then be linked back to cloud or data center applications that can journal events, process transactions, and so forth. The latter are now divided between “phone-home” missions that contact the data center to report condition exceptions or check in at regular intervals, and cloud-connected event handling.

One thing that, according to users themselves, characterizes these current applications is limited event volume, either because events are naturally infrequent or because local processing is doing some form of summarization. It is true that some fixed-facility IoT applications will journal all events, but often the journaling takes place locally and the results are transferred in batch form rather than in real time. Users will ensure that any real-world control generated by an event is quickly fed back, avoiding process paths that add a lot of latency. Thus, applications are designed to push any real-time processing closer to the event source.
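As a concrete illustration of that pattern, here’s a minimal Python sketch of a local edge summarizer: raw readings are handled in a short local control loop, and only periodic summaries are forwarded upstream in batch form. All names, thresholds, and intervals here are hypothetical, not drawn from any particular product.

```python
import time
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class EdgeSummarizer:
    window_seconds: float = 60.0                       # summarization interval (assumed)
    readings: list = field(default_factory=list)
    window_start: float = field(default_factory=time.time)

    def on_event(self, reading: float, alarm_threshold: float = 100.0):
        # Short control loop: react locally, and immediately, to exceptions.
        if reading > alarm_threshold:
            self.actuate_locally(reading)
        self.readings.append(reading)
        # Summarize and forward in batch once the window closes.
        if time.time() - self.window_start >= self.window_seconds:
            summary = {
                "count": len(self.readings),
                "mean": mean(self.readings),
                "max": max(self.readings),
            }
            self.forward_upstream(summary)
            self.readings.clear()
            self.window_start = time.time()

    def actuate_locally(self, reading: float):
        # Placeholder for the real-world control action taken at the local edge.
        print(f"local control action for reading {reading}")

    def forward_upstream(self, summary: dict):
        # In a real deployment this would be a batch message to the cloud or data center.
        print(f"batch summary sent upstream: {summary}")
```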

Right now, these applications face what might be called a “latency dead zone”. Local processing of events produces very low latency, but it obviously doesn’t use hosting in a cost-effective way. Beyond local processing, the next available hosting point (cloud or data center) carries significantly higher latency, so if real-time processing has to be pushed closer to the event source, as just suggested, then pushing it all the way to a local edge may be the only option. Edge-as-a-service could offer an intermediate hosting point and fill in the dead zone.

Stepping outside IoT, we have two broad application classes that are often paired with cloud-hosted event processing, and which aren’t real-world/real-time related. One is log analysis and the other is video analysis, and they obviously represent a class of applications (the “analysis-source”) that might also be the way that some local-edge IoT applications communicate inward toward core applications.

The need to keep control loops short is the primary driver of moving things toward the edge. When analysis is performed at the edge, summary information can be carried deeper into the application, insulating the deeper processing from latency issues, from the need to preserve the relative timing of events, and from potentially high event volumes that could drive costs up. Applications like log and video analysis are among the major drivers of cloud event processing and serverless computing today, but these applications have been visible and available for years and have not revolutionized or galvanized edge computing. For that, I believe, you would still need IoT, but perhaps a different view of IoT than we have today.

The great majority of IoT applications today are highly confined, and that means that they are well-served by the “local edge”, computing resources located on premises and close to the event sources. The latency dead zone I referred to is closed off by the suitability of these local edge resources, for the simple reason that there’s enough event generation in one place to justify that local processing. We could surmise (but I can’t prove it) that applications that aren’t well-suited for local edge processing, ones that need edge-as-a-service, can’t develop today because the facilities aren’t available. One potential proof point of that claim is the “smart city” concept.

A metro area is clearly a distributed real-world framework, but there are few examples of smart-city technology that even approach the digital-twin model. We have some examples of metro traffic control, metro power control, and so forth, but not of a single system that embraces all the real-world things we’d like to manage. In effect, we’ve substituted application-class segregation of event processing for facility segregation. That suggests that there is indeed a lack of an overall model for digital twinning.

What about serverless orchestration tools like Amazon’s Step Functions? Microsoft and Google both have equivalent capabilities, so this represents the cloud providers’ way of handling the orchestration of event-driven tasks. The tool is best visualized as a graphic representation of just what Amazon’s name for it suggests: the steps associated with some event process. You can define a linear stepwise sequence, branching, and so forth, and a step function can (as a step) activate another step function by generating a trigger event.
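To make that concrete, here’s a minimal sketch of such a “steps plus branching” definition in Amazon States Language, registered via boto3. The Lambda ARNs, role ARN, and state names are hypothetical placeholders; a real deployment would substitute its own.

```python
import json
import boto3

# A linear step (ClassifyEvent) followed by a branch (IsException) into two end states.
definition = {
    "StartAt": "ClassifyEvent",
    "States": {
        "ClassifyEvent": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ClassifyEvent",
            "Next": "IsException"
        },
        "IsException": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.severity", "StringEquals": "exception", "Next": "HandleException"}
            ],
            "Default": "JournalEvent"
        },
        "HandleException": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:HandleException",
            "End": True
        },
        "JournalEvent": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:JournalEvent",
            "End": True
        }
    }
}

sfn = boto3.client("stepfunctions", region_name="us-east-1")
sfn.create_state_machine(
    name="EventProcessSketch",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsSketchRole"  # placeholder role
)
```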

For digital twinning, it’s the step-within-step capability that would be important. A complex system could be defined by having each of the simple pieces first defined as steps, and then combining the steps using trigger events, as in the sketch below. Or it could be in theory; it’s not clear whether this would be practical, because of a combination of the difficulty of visualizing the whole picture and the difficulty of maintaining all the states of the individual steps and of the collective system. This is pretty much the same issue that would impact complex event processing (CEP) software.
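Here is a hedged sketch of what that composition might look like: a hypothetical parent (“collective system”) state machine uses the Step Functions service integration for starting another state machine to run one subsystem’s steps, then correlates the result. Again, all ARNs and names are placeholders; it would be registered the same way as the definition above.

```python
# Parent state machine that triggers a child ("subsystem") state machine as one of its steps.
parent_definition = {
    "StartAt": "RunCoolingSubsystem",
    "States": {
        "RunCoolingSubsystem": {
            "Type": "Task",
            # Service integration for starting another state machine and waiting for it.
            "Resource": "arn:aws:states:::states:startExecution.sync",
            "Parameters": {
                "StateMachineArn": "arn:aws:states:us-east-1:123456789012:stateMachine:CoolingTwin",
                "Input": {"triggerEvent.$": "$.event"}   # pass the trigger event downward
            },
            "Next": "CorrelateResults"
        },
        "CorrelateResults": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:CorrelateTwinState",
            "End": True
        }
    }
}
```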

Note that I didn’t cite the serverless relationship to Step Functions (or its competitors) as a problem. That’s because you can orchestrate containerized elements too. That opens the possibility that some local facility or event-source with edge processing could feed an event to the cloud to trigger correlation there. In other words, you could create individual step orchestrations on local edge processing, and let the complexity of merging them into the real-world system be handled in the cloud.
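For example (a sketch, assuming the local edge host has cloud credentials and connectivity, and with all ARNs and field names hypothetical), the local process could run its own short-loop orchestration and then hand a summarized trigger event to a cloud-hosted state machine for correlation:

```python
import json
import boto3

sfn = boto3.client("stepfunctions", region_name="us-east-1")

def forward_to_cloud_correlation(summary: dict):
    # The heavy lifting (merging subsystem states into the real-world system view)
    # happens in the cloud; the local edge only sends a summarized trigger event.
    sfn.start_execution(
        stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:TwinCorrelation",
        input=json.dumps({"source": "plant-7-local-edge", "summary": summary}),
    )

forward_to_cloud_correlation({"line": "A", "exceptions": 2, "mean_temp": 71.4})
```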

Step Functions and other workflow orchestration tools, including Apache Airflow, share a common issue: for digital twinning, they have no built-in mechanism for keeping track of the trigger events that link the individual atomic systems/steps. If a given piece of the digital real-world twin has to generate an event to notify another part, whether higher-level or not, that linkage falls outside the scope of the tools. You can generate these events in your processing, but there’s no easy way, other than manual tracking, to see how the whole structure of steps is linked. That means the process will be difficult to scale up without major risk of errors.
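One way to picture the missing piece is an explicit registry of the trigger-event links between steps, so the overall structure can be inspected and validated rather than maintained by hand. The sketch below is purely hypothetical; it isn’t a feature of Step Functions or Airflow.

```python
from collections import defaultdict

class TriggerLinkRegistry:
    """Tracks which step emits which trigger event to which target step."""

    def __init__(self):
        # source step -> list of (event_name, target step)
        self.links = defaultdict(list)

    def register(self, source: str, event_name: str, target: str):
        self.links[source].append((event_name, target))

    def targets_of(self, source: str):
        return self.links[source]

    def validate(self, known_steps: set):
        # Flag links that point at steps nobody has defined.
        return [
            (src, evt, tgt)
            for src, pairs in self.links.items()
            for evt, tgt in pairs
            if tgt not in known_steps
        ]

registry = TriggerLinkRegistry()
registry.register("CoolingTwin", "overTemp", "PlantTwin")
registry.register("PlantTwin", "shedLoad", "GridTwin")
print(registry.validate({"CoolingTwin", "PlantTwin"}))  # GridTwin is undefined
```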

We can do better, I think, by thinking of the entire digital twin as a graph of finite-state machines (see this excellent article on using FSMs for event-driven applications) representing the systems that make up the real-world target. Traditional FSMs can’t handle a complex real-world digital twin any better than a step-function-like system would, and for the same reason. However, there’s such a thing as a hierarchical state machine (HSM) that could fit the bill, and a few open-source projects support HSMs in some form.
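The core HSM idea is simple enough to sketch: an event is offered to the innermost state first and bubbles up to enclosing states if unhandled, which maps naturally onto a subsystem twin deferring “bigger” events to the system that contains it. This is a minimal illustration, not any particular open-source implementation, and all the names are hypothetical.

```python
class HsmState:
    def __init__(self, name, parent=None, handlers=None):
        self.name = name
        self.parent = parent
        self.handlers = handlers or {}   # event name -> name of the next state

    def handle(self, event):
        state = self
        while state is not None:
            if event in state.handlers:
                return state.handlers[event]   # handled at this level
            state = state.parent               # bubble up the hierarchy
        return None                            # unhandled everywhere

# Hypothetical twin of a pump inside a cooling subsystem inside a plant.
plant = HsmState("Plant", handlers={"powerFail": "PlantShutdown"})
cooling = HsmState("Cooling", parent=plant, handlers={"overTemp": "CoolingBoost"})
pump = HsmState("PumpRunning", parent=cooling, handlers={"cavitation": "PumpThrottled"})

print(pump.handle("cavitation"))  # handled locally -> PumpThrottled
print(pump.handle("overTemp"))    # bubbles to Cooling -> CoolingBoost
print(pump.handle("powerFail"))   # bubbles to Plant -> PlantShutdown
```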

If you look at the HSM reference above (and you understand software), you’ll likely see that while an HSM would work to model a real-world system, a tool based on it would have to go well beyond current implementations to allow such a model to be created and maintained without extensive work and software knowledge. If there’s any such tool on the market today, I can’t find it.

I have to admit that I still prefer the notion of adapting the TMF NGOSS Contract data-model-driven event-to-process mapping as the basis for creating digital twins. The model would represent the elements of the twin, and each element would have an independent state/event table that would decide how to process events and also when to signal an event to another element. The problem is that nobody has really implemented this approach, and it postulates a hierarchical structure of elements rather than a generalized graph that would allow any element to signal any other via an event. The HSM approach would seem to address this, but with a level of complexity that I think would need to be resolved by a specific software package.
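Here’s my reading of that state/event-table idea as a sketch (not the TMF specification itself): each element owns a table that maps a state/event pair to a process to run, a next state, and optionally an event to signal to another element. All element and event names are hypothetical.

```python
class TwinElement:
    def __init__(self, name, table, initial_state):
        self.name = name
        self.state = initial_state
        self.table = table   # (state, event) -> (process_name, next_state, signal)

    def on_event(self, event, elements):
        entry = self.table.get((self.state, event))
        if entry is None:
            return                              # event not meaningful in this state
        process_name, next_state, signal = entry
        print(f"{self.name}: running {process_name} on {event} in state {self.state}")
        self.state = next_state
        if signal:                              # propagate an event to another element
            target_name, target_event = signal
            elements[target_name].on_event(target_event, elements)

# Hypothetical two-element hierarchy: a device element signals its parent subsystem.
device = TwinElement("Device", {
    ("normal", "fault"): ("isolateDevice", "faulted", ("Subsystem", "deviceFault")),
}, "normal")
subsystem = TwinElement("Subsystem", {
    ("operating", "deviceFault"): ("rerouteWork", "degraded", None),
}, "operating")

elements = {"Device": device, "Subsystem": subsystem}
device.on_event("fault", elements)   # Device isolates itself, then signals Subsystem
```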

Where does this leave us? We have tools today that could be said to light the way toward a digital-twinning approach to edge computing applications, presuming those applications were event-driven. Lighting the way doesn’t get us there, though. As things stand, we are at risk of two bad outcomes. First, multiple edge application vendors could take totally different approaches, which would put the operational efficiency of the edge at risk. Second, and worse, no suitable strategy might emerge at all. That could mean that it will be difficult for edge computing to meet our expectations.