Edge Computing: Part One

While there are surely many contenders for the most critical and hyped technology, edge computing is my choice. 5G depends on the edge, and so does IoT. Many believe it’s the future of cloud computing. Many believe it’s the future of premises computing. Everybody seems to believe that it will make them money, despite the fact that different definitions of “the edge” would foreclose the participation of some of the hopefuls.

For all this believing, though, we don’t have much of a picture of what the edge has to be, and that’s what I’m going to work to correct in a series of blogs. The starting point for this, and my focus here, is the specific set of issues that edge computing has to address. I’ll then develop the individual issues and how the market seems to be addressing them.

The first issue for the edge is what differentiates it. There’s not much sense in talking about edge computing if it looks just like other forms of computing, meaning that it has the same financial and technical properties. This is the starting-point issue, and the one we’ll cover now.

Enterprises say that “edge computing” is computing located in “close proximity” to the point of activity that the applications being run are supporting. If they’re pressed on why that’s important, they’ll respond that some application missions require low latency, meaning that they require a quick response to a request. IoT is the example they’ll usually offer. If pressed further, they’ll say that edge missions likely generate events rather than transactions, and that the applications have to be synchronized with real-world conditions. So let’s say that edge computing is computing designed to process latency-sensitive events that couple directly to the real world.

The idea of close proximity implies some geographic placement; we have to know where event sources are in order to know what would be close to them. This introduces the concepts of “event geography”, the distribution of the event sources in space, “event density”, the concentration of event sources in a given geographical area, and “event source mobility”, the extent to which an event source can change locations. These factors are critical in deciding whether edge computing is deployed on the premises or provided as a service.

Where event geography is concentrated, density is high, and mobility is minimal (a factory floor, a warehouse, etc.), we could expect to deploy edge facilities within the location where event sources are concentrated. That means on-premises hosting, likely not in the data center but in a location close to the event source concentration. This is a local edge.

The problem with local edge computing arises when one or more of our three event demographics changes. If event geography is large, if event sources are highly mobile, or if concentrations are low, local-edge processing may be impossible or expensive, in which case we need a different edge model. Which model is required depends on how the event demographics change, and the critical factors seem to be event geography and mobility.

Given that access networks terminate within a metro complex, it seems likely that edge computing as a service would be available at the metro level. This assumption yields two edge-as-a-service models, the metro-centric model and the distributed edge model. Metro-centric edge hosting would support applications where event sources are dominantly located within a single metro area and, if they move at all, move dominantly within that area. Distributed-edge hosting would then support event sources that are distributed beyond a single metro or move across multiple metro areas.
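To make the placement decision concrete, here’s a minimal sketch in Python of how the three demographics might map to a hosting model. The thresholds, the EventDemographics fields, and the HostingModel names are all my own illustrative assumptions, not anyone’s standard; a real placement decision would also weigh latency budgets and cost.

```python
# Illustrative sketch: mapping event demographics to a hosting model.
# Thresholds and names are assumptions for the sake of the example.
from dataclasses import dataclass
from enum import Enum, auto


class HostingModel(Enum):
    LOCAL_EDGE = auto()        # on-premises, near the event sources
    METRO_EDGE = auto()        # edge-as-a-service within one metro area
    DISTRIBUTED_EDGE = auto()  # edge-as-a-service spanning metro areas


@dataclass
class EventDemographics:
    geography_km: float      # radius over which event sources are spread
    density_per_km2: float   # concentration of event sources
    mobile: bool             # can sources change location?
    crosses_metros: bool     # do sources roam beyond a single metro?


def choose_hosting(d: EventDemographics) -> HostingModel:
    # Concentrated, dense, static sources (a factory floor, a warehouse)
    # justify an on-premises local edge.
    if d.geography_km < 1.0 and d.density_per_km2 > 100 and not d.mobile:
        return HostingModel.LOCAL_EDGE
    # Sources confined to one metro area fit a metro-centric service.
    if not d.crosses_metros:
        return HostingModel.METRO_EDGE
    # Anything roaming across metro areas needs the distributed edge.
    return HostingModel.DISTRIBUTED_EDGE


print(choose_hosting(EventDemographics(0.2, 500, False, False)))  # LOCAL_EDGE
```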

The final event processing characteristic we need to consider is “processing depth”, which is a measure of how far an event workflow extends. Note that this isn’t a geographical concept but a workflow complexity concept, an indication of whether an event simply generates a quick response (a control opens a gate) or generates deeper processing (a truck QR code triggers a lookup of load documents, which in turn adjusts warehouse inventory levels).

There seem to be three levels of processing depth that are significant. The first, “local processing”, implies control-loop processes that have action/reaction behavior and no enduring significance. Then we have “journaled processing” where events are still action/reaction but now must be recorded for review and analysis, and finally “transactionalized processing” where the event signals business changes that have to be reflected in other applications.
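Here’s a rough sketch of those three depths as layered handling. The stub handlers (react_locally, append_to_journal, forward_to_business_systems) are hypothetical placeholders, meant only to mark where real control, journaling, and transactional logic would go.

```python
# Illustrative sketch: the three processing depths as layered handling.
from enum import Enum, auto


class ProcessingDepth(Enum):
    LOCAL = auto()              # action/reaction, no enduring record
    JOURNALED = auto()          # action/reaction plus a recorded history
    TRANSACTIONALIZED = auto()  # event ripples into other applications


def react_locally(event: dict) -> None:
    print(f"control action for {event['id']}")    # e.g. open the gate


def append_to_journal(event: dict) -> None:
    print(f"journaling {event['id']}")            # e.g. write to a log store


def forward_to_business_systems(event: dict) -> None:
    print(f"transaction from {event['id']}")      # e.g. adjust inventory


def handle_event(event: dict, depth: ProcessingDepth) -> None:
    react_locally(event)                # every depth needs the quick response
    if depth is ProcessingDepth.LOCAL:
        return
    append_to_journal(event)            # journaled and deeper: keep a record
    if depth is ProcessingDepth.TRANSACTIONALIZED:
        forward_to_business_systems(event)


handle_event({"id": "truck-42"}, ProcessingDepth.TRANSACTIONALIZED)
```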

Local processing, combined with a local edge hosting model, means that the applications don’t really have to conform to any particular architecture, and in fact are likely to be related to the devices involved in creating and acting on the events. Special tools and middleware are less likely to be required.

Where local processing involves either metro-centric or distributed-edge hosting, the applications would have to conform to edge computing development practices set by the edge hosting provider. If event densities are low, some form of serverless hosting would be appropriate, provided the cost model works out; as densities rise, a container model is likely better.

With journaled processing, enterprises say that it would rarely be desirable to journal events locally except on a temporary basis while sending them to a deeper facility. They also say that journaled events would be collected either in the data center (most likely) or the cloud, and that they could be bulk transported rather than sent interactively. Thus, this model doesn’t introduce a lot of additional architectural requirements or generate a need for specific tools/middleware.
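A minimal sketch of that pattern might look like the following; the batch size and the ship_batch() sender are invented for illustration, standing in for whatever bulk-transport mechanism actually moves the journal to the data center or cloud.

```python
# Illustrative sketch: journal events locally only as a temporary buffer,
# then ship them in bulk rather than interactively.
import json
from typing import List


def ship_batch(payload: str) -> None:
    # Hypothetical bulk sender; in practice this would push to the data
    # center or cloud over whatever transport the enterprise uses.
    print(f"shipping {len(payload)} bytes of journaled events")


class TemporaryJournal:
    """Holds events locally, then ships them in bulk rather than one by one."""

    def __init__(self, batch_size: int = 500):
        self.batch_size = batch_size
        self.buffer: List[dict] = []

    def record(self, event: dict) -> None:
        self.buffer.append(event)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        if not self.buffer:
            return
        ship_batch(json.dumps(self.buffer))  # one payload per batch
        self.buffer.clear()


journal = TemporaryJournal(batch_size=2)
journal.record({"id": "sensor-1", "reading": 3})
journal.record({"id": "sensor-2", "reading": 7})  # second event triggers a flush
```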

Transactionalized processing is the complicated stuff, because there are a number of very different drivers. The “processing” may involve correlation of events in time and space, linkage to deeper databases and applications, and even extension of the control loop. It may require linking to the cloud, to the data center, to the latter through the former, or to both. Most potential event-driven applications likely fall into this category, though many of the current ones still represent the earlier and easier models.
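One piece of that correlation work, pairing events that are close in time and space, can be sketched simply. The ten-second window and fifty-meter radius below are arbitrary illustrative values, not drawn from any real application.

```python
# Illustrative sketch: correlating events in time and space before handing
# them to deeper applications. Window and radius are arbitrary assumptions.
from dataclasses import dataclass
from typing import List, Tuple
import math


@dataclass
class Event:
    source_id: str
    x_m: float        # position in meters, local coordinates
    y_m: float
    timestamp: float  # seconds since epoch


def correlated(a: Event, b: Event,
               window_s: float = 10.0, radius_m: float = 50.0) -> bool:
    close_in_time = abs(a.timestamp - b.timestamp) <= window_s
    close_in_space = math.hypot(a.x_m - b.x_m, a.y_m - b.y_m) <= radius_m
    return close_in_time and close_in_space


def correlate(events: List[Event]) -> List[Tuple[Event, Event]]:
    # Pair up events that plausibly describe the same real-world occurrence,
    # e.g. a dock gate sensor firing and a truck's QR code being scanned.
    pairs = []
    for i, a in enumerate(events):
        for b in events[i + 1:]:
            if correlated(a, b):
                pairs.append((a, b))
    return pairs
```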

It’s also transactionalized event processing that introduces the other topics we’ll cover in my series on edge computing. I’ll summarize them here, and take each of them up later on.

The first of the topics is edge portability and hybridization. In transactionalized applications, the edge can be considered an extension of something else, so the obvious question is just how much software architecture at the edge has to draw on the architecture of what it extends. The biggest part of this question is whether we should look at edge-as-a-service as an extension of public cloud software models.

The second topic is dynamic edge distributability. In some transactionalized applications, it may be necessary to dynamically distribute components among edge hosting points in response to changes in requirements or the movement, addition, or deletion of event sources. This isn’t about backup functions, but rather about changes in the nature of the event mission created by a changing set of event sources. Think gaming, where players might move their avatars into proximity and thus indicate that a common hosting point should accommodate the concentration.
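A toy sketch of that gaming case might pick the edge site nearest the players’ centroid. The site names and coordinates below are invented, and a real system would also weigh load, latency, and the cost of migrating components.

```python
# Illustrative sketch: choosing a common hosting point when avatars cluster.
from typing import Dict, Tuple
import math

Point = Tuple[float, float]


def centroid(positions: Dict[str, Point]) -> Point:
    xs = [p[0] for p in positions.values()]
    ys = [p[1] for p in positions.values()]
    return sum(xs) / len(xs), sum(ys) / len(ys)


def pick_hosting_point(players: Dict[str, Point],
                       edge_sites: Dict[str, Point]) -> str:
    # Re-home the shared game state to the edge site closest to where the
    # players' avatars have concentrated.
    cx, cy = centroid(players)
    return min(edge_sites,
               key=lambda s: math.hypot(edge_sites[s][0] - cx,
                                        edge_sites[s][1] - cy))


players = {"p1": (0.0, 1.0), "p2": (1.0, 0.0), "p3": (0.5, 0.5)}
sites = {"metro-a": (0.0, 0.0), "metro-b": (10.0, 10.0)}
print(pick_hosting_point(players, sites))  # -> metro-a
```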

The final topic is edge security. Not all edge models have the same security issues, and not all edge security issues will be obvious, particularly in complex distributed-edge situations. The emergence of edge computing as a technology, and of IoT and the real-world synchronicity it brings, could generate massive problems if we’re not careful, and perhaps even if we are.

These are the topics I’m going to address in the rest of this series, but I’ll do so by talking about a set of closely related edge-hosting models and relating the issues to each. The first of the series will be about the edge as an extension of the cloud, and I hope you’ll be interested in the series and will comment freely on LinkedIn. I’ll post the entire series as a CIMI Corporation white paper, available for download (at no cost, obviously) when the series is complete.