Carrier Cloud and Edge Computing Connections

Recently, my blogs on carrier cloud have focused on the question of whether the trend among operators toward outsourcing their hosting to public cloud providers will impact carrier cloud.  A related question, raised in this Light Reading piece, is whether it could also impact edge computing.  An element of that question is whether edge computing has “primary benefits” that might actually influence telcos, cloud providers, or both to fight for it.

In our hype-driven age, we often find that new technologies are justified only by being new, by being the logical extension of or progression from current technology.  5G is an example; we have a clear justification for evolving to it, but principally to increase capacity and improve spectrum utilization.  What might be done to generate more wireless revenue based on 5G is an open question.  We need a “primary benefit”, something users are prepared to pay incrementally to do, in order to pull 5G through faster or in its more advanced (5G Core) form.

Edge computing can be linked to things like 5G, but if that’s the path to justifying it, then we’d need to prove either that 5G itself needs or significantly benefits from edge computing, or that some primary benefit of 5G has that positive relationship with the edge.  What’s the real situation, at least as much of it as can be seen now?

There is little doubt that consolidation of computing resources, whatever the application, results in greater operations and capital efficiency.  My models have generally shown that computing concentrated in metro data centers would usually generate enough efficiency that further concentration (regional data centers) wouldn’t add much.  The question for edge computing is whether dispersing computing further outward would sacrifice much of that efficiency, and if so, whether there’d be a benefit to justify the loss.

It’s hard to model data center efficiency in a way that accounts for all the variables, but using the US (the market for which I have the most and best data) as an example, my model shows that of the roughly 250 metro areas, about 180 are dense enough that there would be a negligible efficiency difference if we dispersed computing across as many as 10 “edge” sites per metro area, and just short of 100 would justify over 20 edge sites.  We’d end up with about 1,300 edge data centers if we wanted to disperse with no significant efficiency risk.

There are roughly 12,000 “edge office” locations in the US, places where access lines terminate and where network operators (telcos, cablecos, mobile operators, etc.) have real estate to offer.  That’s about 10 times the number of edge locations that could be deployed without loss of efficiency, which means that reaching full edge-hosting deployment, even just to edge offices, would require a driver.
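To make the arithmetic concrete, here’s a minimal sketch in Python.  The split of metros into tiers, and the site counts per tier, are my own illustrative assumptions chosen only to land near the figures above; they aren’t the model’s actual output.

```python
# Rough tally of US edge-hosting sites versus available edge offices.
# The per-tier metro counts and sites-per-metro are illustrative assumptions,
# chosen only to land near the ~1,300 total cited above.

METRO_TIERS = {
    "high_density": {"metros": 100, "edge_sites_each": 10},  # densest metros (assumed)
    "mid_density":  {"metros": 80,  "edge_sites_each": 4},   # other "dense enough" metros (assumed)
    "low_density":  {"metros": 70,  "edge_sites_each": 0},   # better served from one metro data center
}

EDGE_OFFICES_US = 12_000  # access-termination "edge office" locations cited above

def total_edge_sites(tiers: dict) -> int:
    """Sum edge data centers across the assumed metro tiers."""
    return sum(t["metros"] * t["edge_sites_each"] for t in tiers.values())

if __name__ == "__main__":
    sites = total_edge_sites(METRO_TIERS)
    print(f"Edge data centers at no significant efficiency risk: ~{sites}")
    print(f"Edge offices available: {EDGE_OFFICES_US}")
    print(f"Edge offices per justified edge site: ~{EDGE_OFFICES_US / sites:.1f}x")
```

Run as written, the assumed tiers yield roughly 1,300 edge sites against 12,000 edge offices, which is where the “about 10 times” gap comes from.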

The most credible benefit of edge computing is latency control.  Data moves through an optical pipe at about a hundred thousand miles per second, which means that one mile is transited in about 10 microseconds.  If access were homed directly to metro data centers, that would add only about a millisecond to over 90% of all metro hosting applications.  If the connection passed through multiple packet devices instead, the added latency would be much greater, on the order of 25 milliseconds.
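Here’s that arithmetic as a small sketch.  Fiber propagation at roughly 100,000 miles per second works out to about 10 microseconds per mile; the 3 ms per-device figure is an assumption I’ve added to illustrate that packet hops, not fiber distance, dominate the delay budget.

```python
# Back-of-the-envelope latency arithmetic for metro hosting.
FIBER_MILES_PER_SEC = 100_000                         # rough figure for data in an optical pipe
PROP_US_PER_MILE = 1_000_000 / FIBER_MILES_PER_SEC    # ~10 microseconds per mile

def propagation_ms(miles: float) -> float:
    """One-way propagation delay over fiber, in milliseconds."""
    return miles * PROP_US_PER_MILE / 1000.0

def path_delay_ms(miles: float, packet_hops: int, per_hop_ms: float = 3.0) -> float:
    """Propagation plus an assumed per-hop queuing/processing delay.
    The 3 ms per-hop figure is an illustrative assumption, not a measurement."""
    return propagation_ms(miles) + packet_hops * per_hop_ms

if __name__ == "__main__":
    # Access homed directly to a metro data center ~100 miles away:
    print(f"Direct metro homing: ~{path_delay_ms(100, packet_hops=0):.1f} ms")
    # Same distance, but through a string of packet devices:
    print(f"Via 8 packet hops:   ~{path_delay_ms(100, packet_hops=8):.1f} ms")
```

With those assumptions, the direct path comes out to about a millisecond and the multi-hop path to about 25 ms, which is the contrast the paragraph above describes.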

There are applications that could be sensitive to one-way delays of that magnitude.  Gaming, at least in its online multi-player form, is the best example.  However, it’s important to remember that the latency of an application is the sum of the latencies of all the connections over which the application exchanges data, either with users or among its component pieces.  The architecture of a game and the nature of the paths between players would determine whether the benefit of lower latency in that first hop really matters.  If all of a game’s players were hosted in the same edge or metro center, it probably would matter.  If the players were internationally distributed, not so much, if at all.  Virtual reality as a driver is linked to gaming, the current killer app for VR, so it has the same issue set.
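The “sum of the latencies” point is easy to gloss over, so here’s a toy comparison.  Every per-leg number below is invented for the illustration; the point is only that shaving the access hop from a millisecond or two to near zero is swamped when another leg crosses an ocean.

```python
# Toy comparison: does shaving the access hop matter for a multi-player game?
# All per-leg latency values are invented for illustration.

def game_latency_ms(legs: list[float]) -> float:
    """End-to-end latency is the sum of the latencies of all the legs
    a game update traverses between two players."""
    return sum(legs)

# Players homed in the same metro/edge center (assumed leg latencies, ms):
local_metro = [1.0, 2.0, 1.0]    # access hop, metro switching, access hop
local_edge  = [0.1, 2.0, 0.1]    # same match with the access hops edge-hosted

# Internationally distributed players (assumed leg latencies, ms):
global_metro = [1.0, 90.0, 1.0]  # access hop, transoceanic backbone, access hop
global_edge  = [0.1, 90.0, 0.1]

print(f"Local match, metro-hosted:  {game_latency_ms(local_metro):.1f} ms")
print(f"Local match, edge-hosted:   {game_latency_ms(local_edge):.1f} ms")   # big relative gain
print(f"Global match, metro-hosted: {game_latency_ms(global_metro):.1f} ms")
print(f"Global match, edge-hosted:  {game_latency_ms(global_edge):.1f} ms")  # negligible relative gain
```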

Autonomous vehicles have been cited as a driver of the need for low-latency computing, but that’s only true if you presume that some central controller (hosted in an edge site) is running all the vehicles within the local geography.  We have a lot of projects today on the on-ramp to self-driving or driver assistance, and all of them rely on local sensors to detect time-critical events, like an obstacle in the path of the vehicle.  There might be opportunities for autonomous vehicles to exploit edge computing, but not to drive it.  You’d need to somehow get vehicles that depend on the edge deployed with or before the edge resources they depend on.

Augmented reality is a credible driver, primarily because it requires that any “augmentation” be linked precisely with the “reality” piece.  Further, AR likely depends on analysis of the user’s local context and the context of other users and other things in the area.  The “in the area” part means, first, that information further from the user than a couple hundred yards would rarely be valuable because the user wouldn’t see it, and second, that an area-wide picture of things and users would be sufficient to satisfy AR needs.  This links AR with personalization and contextualization, legitimate “edge” applications because of their local focus.
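One way to picture the “in the area” constraint is a simple radius filter: an AR service only needs to push augmentations for things within the user’s visual range, which keeps the working set naturally local to an edge.  The 200-yard radius and the object records below are assumptions made for the sketch, not anyone’s actual AR architecture.

```python
# Sketch of AR context filtering: only objects within visual range of the
# user are worth augmenting, so the working set stays edge-local.
# The 200-yard radius and the object records are illustrative assumptions.

from dataclasses import dataclass
from math import hypot

VISUAL_RANGE_YARDS = 200.0   # beyond this, the user can't see the augmentation anyway

@dataclass
class ContextObject:
    name: str
    x_yards: float   # position relative to a local (per-edge) coordinate origin
    y_yards: float

def visible_context(user_x: float, user_y: float,
                    objects: list[ContextObject]) -> list[ContextObject]:
    """Return only the objects close enough to the user to be worth augmenting."""
    return [o for o in objects
            if hypot(o.x_yards - user_x, o.y_yards - user_y) <= VISUAL_RANGE_YARDS]

if __name__ == "__main__":
    area = [ContextObject("coffee shop", 50, 40),
            ContextObject("bus stop", 150, 120),
            ContextObject("stadium", 900, 300)]   # too far away to matter
    for o in visible_context(0, 0, area):
        print(f"Augment: {o.name}")
```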

The Light Reading article describes Telstra’s experience with cloud providers promoting their own edge-hosting capabilities to operators as an alternative to “self-deployment” of edge computing as part of carrier cloud.  I think there are a couple of useful links between cloud-provider interest in the edge and Telstra’s situation.

Australia, Telstra’s home territory, has very low demand density because so much of it has very low (even zero) population density.  Not only that, most of the population is concentrated in a small number of widely separated metro areas around the major cities.  What little data I have on Oz (as the locals call their country) suggests that every one of those metro areas would qualify for high-density edge computing, simply because it would be easy to accumulate enough demand within the cities to justify hosting closer to it.

Why would cloud providers be chasing the Oz-edge?  I think the answer is “to keep Telstra from deploying” more than “to exploit AR (or other) near-term opportunities.”  The most dangerous thing for a cloud provider is more competition, and the most dangerous form of competition is competition for the edge, because operators have a natural real estate advantage there with their access termination points.  If Telstra or any operator deploys data center hosting in edge offices, those in-place resources could invite consideration of new services to exploit them.  If those edge resources don’t require a lot of justification, they could be deployed in advance of demand, and then create the demand.  That would put the cloud providers on the defensive.

On the other hand, if operators outsource their hosting, including edge hosting, then cloud providers build the resource pools at the edge, cloud providers get to exploit them with applications that can utilize but not justify edge resources, and cloud providers keep others out of the market.  Wise choice, and perhaps a critical one for edge computing.

Operators, overall, are very antsy about investing in their own cloud infrastructure at this point.  Left to their own devices, they could decide to let the old earth take a couple of whirls, as the song goes.  OTT competition, even prospective competition, could turn them toward the edge investment, and if the kind of partnership that was proposed to Telstra is taken up by cloud providers elsewhere, it could mean that we get the edge deployments we’ve been wanting.

Still, edge enthusiasts shouldn’t get cocky here.  There is no strong business case yet for massive, or even moderate, edge deployment.  If there were, everyone would be running out to deploy.  If early initiatives, whether from operators or cloud providers, fail to realize any of the possible opportunities I’ve cited here, we could see the whole thing stall.