Why MEC Might be Getting Crippled by Uncertainty

OK, there’s no lack of (or loss of) interest in multi-access edge computing (MEC), but there’s also no lack of fuzziness over the right way to think about it. A recent Light Reading article, the first of four promised pieces on the cable industry’s view of MEC, offers some insights on the important question of how network operators might play in the game. It also demonstrates, perhaps less clearly, that there’s a big issue standing in the way of MEC.

Any topic, however potentially revolutionary, can be exhausted from a media coverage perspective. Virtually all our tech news is ad sponsored, and that means that what gets reported is what gets clicked on. With any topic, click potential is based on news value rather than truth value, and while most people might click on the first dozen or so articles on something like cloud computing, they eventually reach a point where they stop clicking. Some of that is because it’s very difficult to write deep stories, and the superficial angles get used up quickly. Edge computing is a good example.

The problem with edge computing right now is that even potential suppliers of edge services are having a problem coming to terms with just what it might be good for, and just how it would have to be provided. These topics are complicated to cover, and the truth is that much of the stuff that needs to be communicated would be dull, dry, and targeted to a very limited audience. As a result, cable companies like those discussed in the LR piece are conveying uncertainty as much as insight.

We can divide the concept of edge computing in a number of ways, ranging from how it’s done to who offers it, and on to what it’s good for. I think most everyone would agree that edge computing has to have a distinct mission versus public cloud, and that distinct mission is latency-sensitive applications. You compute close to the source of work because you can’t tolerate delay. That leaves us with the “who” and the “how”, and I think that the “how” is likely, maybe certain, to establish the “who”, but even the “how” dimension has to be divided.

The big division on the “how” is between the vision that the edge is simply the cloud pushed closer to the user and the vision that the edge is architecturally different. In the transplanted-cloud vision, the features of edge computing are derived from current cloud features, placed so as to optimize latency control. Development of edge applications would follow the pattern of cloud application development, meaning it would likely be specialized to the enterprises that adopt the model. In the edge-is-a-new-architecture vision, edge applications rely largely on features that are not present in the cloud, or are present only in a very primitive form. Providing those features as part of the platform then takes a lot of the heavy lifting out of application development. Think of the vision difference as an IaaS versus a PaaS or even SaaS approach to MEC.

The Light Reading story cites Heavy Reading research that says that cable companies are thinking of a “hybrid cloud” model, a term I think is unfortunate because it links edge to enterprise cloud usage, and thus really tips the vision of MEC toward traditional cloud computing features. They say that cable companies believe they’ll pick a MEC strategy (build or buy) depending on the application. But to run MEC anywhere, they have to use tools, and if there are tools only to support the cloud-centric MEC vision, then they’ve bought into the idea that their edge computing will be hosted by the cloud providers. That’s problematic for two reasons.

The first reason is obvious: if the network operators don’t build out their own edge hosting, they’re surrendering the MEC market to the cloud providers, and they become users or resellers of those services, not creators. If there is a decisive shift of service infrastructure from simple network switching/routing to hosted features, and if future revenues depend on even higher-level service features, then they’ve thrown away what might well be the last opportunity to stop the disintermediation trend all operators have complained about for decades.

The second reason, believe it or not, could be worse. If edge applications require features that are not part of the cloud-provider web-service inventory, then the lack of those features could slow the pace at which enterprises exploit MEC. Beyond that, it could Balkanize development models and tools, creating silos and preventing the establishment of a single model for developers to learn and operations personnel to manage.

So what are the things that “the edge” might need and that the cloud doesn’t have? I think they’re divided too, this time into charging/cost issues and feature issues.

Edge applications are latency-sensitive, remember? The great majority of latency-sensitive applications relate to supporting some real-time activity, including gaming, IoT, and even “metaversing”. Those applications are almost always event-driven, meaning that they receive some real-time stimulus and process it to create a real-time response. The traditional way cloud providers have addressed this is through “serverless” offerings, which means the users buy processing rather than capacity to process. Everyone who’s tried serverless knows that it’s great when the alternative is having fixed resources sitting around waiting for work, but terrible when a lot of work actually gets presented. The pricing model for MEC, if it’s based on serverless, could end up pricing MEC out of its own market.
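
To put some rough numbers on that, here’s a back-of-the-envelope sketch in Python. The per-invocation, per-GB-second, and reserved-instance prices are purely illustrative assumptions on my part, not any provider’s actual rate card, but they show the crossover the paragraph above describes: pay-per-use wins when events trickle in, and fixed capacity wins when the event volume is sustained.

```python
# Hypothetical cost comparison: per-invocation ("serverless") pricing versus a
# reserved edge instance. All prices are illustrative assumptions.

PER_INVOCATION_COST = 0.0000002         # assumed $ per request
PER_GB_SECOND_COST = 0.0000166667       # assumed $ per GB-second of execution
RESERVED_INSTANCE_COST_PER_HOUR = 0.09  # assumed $ per hour for a small edge VM

def serverless_monthly_cost(requests_per_second: float,
                            avg_duration_s: float,
                            memory_gb: float) -> float:
    """Cost of handling the event load with pay-per-use functions."""
    seconds_per_month = 30 * 24 * 3600
    requests = requests_per_second * seconds_per_month
    compute_gb_s = requests * avg_duration_s * memory_gb
    return requests * PER_INVOCATION_COST + compute_gb_s * PER_GB_SECOND_COST

def reserved_monthly_cost(instances: int) -> float:
    """Cost of keeping fixed capacity sitting around waiting for work."""
    return instances * RESERVED_INSTANCE_COST_PER_HOUR * 30 * 24

if __name__ == "__main__":
    for rps in (1, 10, 100, 1000):
        s = serverless_monthly_cost(rps, avg_duration_s=0.05, memory_gb=0.25)
        r = reserved_monthly_cost(instances=max(1, rps // 500 + 1))
        print(f"{rps:>5} events/s  serverless ${s:>10.2f}  reserved ${r:>10.2f}")
```

At a trickle of events, the serverless column is pennies against tens of dollars for an idle instance; at sustained volume the relationship flips, which is exactly the trap a serverless-priced MEC could fall into.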

This problem could be exacerbated by the fact that serverless computing’s performance, and in particular its latency, lags traditional cloud computing. Users of serverless report cold-start (load-and-run) delays sometimes running to hundreds of milliseconds, which is way beyond what could be considered “low latency”. The problem could be worsened further because edge resource pools are constrained: the addressable market for a given edge site is smaller than the addressable market for a cloud regional hosting point, so there’s less warm capacity to draw on.
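
Here’s a tiny illustration of why that matters, assuming a hypothetical cold-start penalty and warm-hit rate (the numbers are assumptions chosen for illustration, not measurements of any platform): even an occasional cold start wrecks a tens-of-milliseconds budget.

```python
# Illustrative latency-budget check for a serverless edge function.
# All numbers are assumptions for illustration only.

NETWORK_RTT_MS = 5.0       # assumed round trip to the edge site
EXECUTION_MS = 10.0        # assumed warm execution time
COLD_START_MS = 250.0      # assumed load-and-run (cold start) penalty
LATENCY_BUDGET_MS = 30.0   # assumed control-loop tolerance

def response_time_ms(warm_hit_rate: float) -> tuple[float, float]:
    """Return (expected, worst-case) response time for a given warm-hit rate."""
    expected = NETWORK_RTT_MS + EXECUTION_MS + (1.0 - warm_hit_rate) * COLD_START_MS
    worst = NETWORK_RTT_MS + EXECUTION_MS + COLD_START_MS
    return expected, worst

for warm_rate in (0.99, 0.95, 0.80):
    expected, worst = response_time_ms(warm_rate)
    verdict = "OK on average" if expected <= LATENCY_BUDGET_MS else "blows the budget"
    print(f"warm-hit {warm_rate:.0%}: expected {expected:.1f} ms, "
          f"worst {worst:.1f} ms -> {verdict}")
```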

The technical problems are also complicated. First and foremost, many edge applications are really multi-edge applications, meaning that they involve event sources that are distributed further than a single edge site could support. What that means is that edge applications could involve the same sort of “near-real-time” or “true-real-time” versus “non-real-time” functional division that we see in O-RAN, only probably with more categories.

Applications like IoT, in its industrial, warehouse, or even smart-city forms, are likely to be suitable for single-edge hosting of the real-time elements, but even these may then have to hand off processing to a deeper resource once the real-time control loops have been closed at the edge. We could envision the processing of events for these applications as a series of concatenated control loops, each touch point representing a place where delay sensitivity changes significantly. But while we might know where an application’s constraints on delay change, we don’t know what the actual delay associated with available hosting points might be. In one situation, we might be able to support Control Loop A at the metro or even neighborhood edge and then hand off Control Loop B to a regional center; in another, we might have to hand it to a nearby alternative metro point. We need to be able to orchestrate deployments based on experienced delay versus control-loop delay tolerance.
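
Here’s a minimal sketch of what that kind of orchestration decision might look like, assuming hypothetical hosting tiers (neighborhood, metro, alternate metro, regional) with measured round-trip delays. The tier names, numbers, and placement rule are my own illustration, not anyone’s product: pick the deepest site whose experienced delay still fits the loop’s tolerance.

```python
# Minimal placement sketch: match control loops to hosting points by
# measured (experienced) delay versus each loop's delay tolerance.
from dataclasses import dataclass
from typing import Optional

@dataclass
class HostingPoint:
    name: str
    measured_rtt_ms: float   # experienced delay to the event source
    free_capacity: int       # available hosting slots

@dataclass
class ControlLoop:
    name: str
    max_delay_ms: float      # delay tolerance of this loop

def place(loop: ControlLoop, candidates: list[HostingPoint]) -> Optional[HostingPoint]:
    """Pick the deepest hosting point that still meets the loop's tolerance."""
    viable = [hp for hp in candidates
              if hp.measured_rtt_ms <= loop.max_delay_ms and hp.free_capacity > 0]
    # Prefer the viable point with the largest acceptable delay, on the
    # assumption that deeper sites have better economies of scale.
    return max(viable, key=lambda hp: hp.measured_rtt_ms, default=None)

if __name__ == "__main__":
    sites = [
        HostingPoint("neighborhood-edge", 2.0, 3),
        HostingPoint("metro", 8.0, 40),
        HostingPoint("alt-metro", 14.0, 60),
        HostingPoint("regional", 35.0, 500),
    ]
    loops = [ControlLoop("Control Loop A", 10.0), ControlLoop("Control Loop B", 50.0)]
    for loop in loops:
        chosen = place(loop, sites)
        print(loop.name, "->", chosen.name if chosen else "no viable site")
```

Run against those assumed numbers, Control Loop A lands at the metro edge and Control Loop B hands off to the regional center, which is exactly the sort of decision that has to be driven by measured delay rather than by a static topology.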

It’s clear from this that having the potential hosting points connected through high-capacity, low-latency trunks with minimal trans-switching of traffic would be almost essential. Metro-area hosting points would surely have to be almost or fully meshed, and metro areas themselves would also likely have to be meshed and multi-homed to regional or central points. We aggregate traffic to the places where we host functions, not to places where we can jump on a high-capacity trunk to achieve bandwidth economy of scale.

All of this could be especially critical in applications like the metaverse, which requires the coordination of widely separated application elements, each representing an “inhabitant”. A realistic experience depends on keeping the behavior of interacting avatars synchronized with the people they represent, people who inhabit a widely distributed real world. In fact, as I’ve noted in prior blogs, the full potential of a metaverse couldn’t be realized without that kind of edge coordination.

Even IoT applications could be impacted by a lack of edge-specific tools and models. The current “edge” strategy for IoT is to create the edge by pushing the cloud onto the premises, using enterprises’ own servers. This tends to encourage applications that close the most sensitive control loop before events ever get off-site, and the public cloud software that’s pulled onto the premises is often “transactionalizing” edge events, recording them after the real-time processing has been completed. If real-time handling is already done before events leave the premises, then there’s little latency-critical work left for edge computing to do, and the whole business model of the edge is questionable.
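
A rough Python sketch of that pattern, with hypothetical function names, sensor fields, and threshold, shows why so little is left over for the edge: the latency-critical decision happens in a local handler, and only an after-the-fact record is queued for deeper hosting.

```python
# Sketch of the on-premises "transactionalize later" pattern described above.
# The sensor fields, threshold, and queue are hypothetical, for illustration only.
import json
import queue
import threading
import time

record_queue: queue.Queue = queue.Queue()

def close_control_loop(sensor_event: dict) -> str:
    """Real-time step: decide and actuate locally, before the event leaves the site."""
    action = "shutdown" if sensor_event["temperature_c"] > 90 else "ok"
    # actuate(action) would be called here, inside the local control loop
    return action

def forward_records() -> None:
    """Non-real-time step: record what already happened (the 'transactional' part)."""
    while True:
        record = record_queue.get()
        if record is None:   # sentinel: stop the forwarder
            break
        # In practice this would be sent to a deeper edge or cloud endpoint.
        print("recorded:", json.dumps(record))

forwarder = threading.Thread(target=forward_records)
forwarder.start()

event = {"sensor": "press-7", "temperature_c": 95.0, "ts": time.time()}
action = close_control_loop(event)               # latency-critical, stays on-premises
record_queue.put({**event, "action": action})    # latency-tolerant, after the fact
record_queue.put(None)
forwarder.join()
```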

The features needed to make all this work are going to be turned into platform software, PaaS, or SaaS by somebody. Operators, by diddling over the question of whether to build their own cloud or buy into partnerships with one or more cloud providers, are missing the big question. Will they build that platform software? If they don’t, then they absolutely surrender edge differentiation, probably forever.