Is “the edge” an extension of the public cloud? That may well be the biggest question we face in edge computing, because the answer determines which players are likely to dominate the evolution of edge computing, and who will frame how edge applications are written. Those factors may then determine just how fast we get a viable edge architecture, or whether we ever get one.
Edge computing and edge-as-a-service are related but not identical concepts, just as computing and cloud computing are. Both need an architectural model to guide development, and that model can be obtained in one of two ways: by extending the cloud to the premises using cloud-provider tools, or by taking a premises-compatible software suite and building applications that then run as containers or VMs in the cloud. In the former model, the cloud providers win; in the latter, the software providers do.
The winner of a race is the player that gets there first, obviously. Public cloud providers not only think that the edge is a part of the cloud, they’re the only class of supplier in the market that’s proactively developing the opportunity. Yes, others are talking about the edge, but they’re really just preparing to take orders, not to sell in a proactive sense. Not so the cloud providers; they’re doing everything they can to own the opportunity.
The basic thesis of the public cloud providers’ edge strategy is the classic camel’s nose, based on the logical assumption that true edge applications don’t exist yet, so whoever builds them will influence how the edge evolves. The providers offer a way of hosting their cloud tools on premises equipment, the obvious effect of which would be to encourage edge applications that rely on their cloud APIs. That, in turn, would facilitate their offering actual edge-as-a-service down the line. Whether this works depends largely on what’s driving a particular edge opportunity, and on what competing approaches are known to buyers.
There are two situations where the edge opportunity drivers would lead to a symbiosis between edge and cloud. First, where the buyer already has a strong commitment to the public cloud, which for enterprises likely means they’ve created cloud front-ends to modernize legacy business applications. Second, where the edge mission lends itself to a combination of edge and cloud processing. It’s likely that both of these would have to be present, at least to some extent, to promote the cloud providers’ interest in the edge, particularly given that there are no true “edge-as-a-service” offerings yet, so customer-owned, local-edge technology will have to do the hosting.
There are obviously a lot of enterprises with public cloud commitments, but what exactly are they committed to? Today’s public cloud applications aren’t particularly event-oriented, and if events are the real future of the edge, then what’s already running in the cloud isn’t especially applicable to the emerging edge missions. Would enterprises adopt cloud tools they’d never used before, in local-edge applications they develop?
The credibility of the “competing approaches” counterpoint depends on the same question, of course. If enterprises have to develop their own local-edge software, they could in theory use software tools not specific to the cloud; after all, they’re developing local applications. In that case, edge strategies would drift away from the cloud model, and we’d have a greater chance of what my opening blog in this series called the “local edge,” without any specific link to cloud-provider tools.
The greatest mission driver for a cloud-centric edge is where event sources are highly distributed and/or highly mobile. This would make it difficult to justify premises edge hosting at all, and some of these distributed/mobile edge applications aren’t so latency-sensitive as to require “edge” at all; the events could be fielded by the cloud. I worked with a transportation-industry player that had a specialized cellular connection to its carriers, used to report conditions like out-of-range temperature, impacts, or openings. These don’t require low latency; they’re alert applications rather than control applications. However, the same player also had applications that involved traditional control loops, so its processing mixed cloud and edge-as-a-service requirements.
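To make that split concrete, here’s a minimal sketch in Python of how an event router might separate latency-tolerant alerts, which the cloud can field, from control-loop events that need local handling. The event types and handler names are all hypothetical, purely for illustration:

```python
import time
from dataclasses import dataclass

# Hypothetical event taxonomy, purely illustrative. Alerts (temperature
# excursions, impacts, openings) tolerate seconds of latency; control-loop
# events typically need a response within tens of milliseconds.
ALERT_TYPES = {"temp_out_of_range", "impact", "container_opened"}
CONTROL_TYPES = {"actuator_feedback", "position_correction"}

@dataclass
class Event:
    source_id: str
    event_type: str
    payload: dict
    timestamp: float

def handle_locally(event: Event) -> str:
    # Placeholder for a local, millisecond-budget control-loop response.
    return f"edge-handled {event.event_type} from {event.source_id}"

def forward_to_cloud(event: Event) -> str:
    # Placeholder for enqueueing to a cloud event service.
    return f"cloud-queued {event.event_type} from {event.source_id}"

def route_event(event: Event) -> str:
    """Keep control-loop events on the local edge; send latency-tolerant
    alerts (and anything else) to the cloud."""
    if event.event_type in CONTROL_TYPES:
        return handle_locally(event)
    return forward_to_cloud(event)

print(route_event(Event("truck-17", "temp_out_of_range",
                        {"celsius": 9.4}, time.time())))
```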
The cloud also has great potential for event-based applications that require coordination of events from different sources, different locations, or both. Where event sources are highly distributed and where process elements are therefore distributable, the cloud is the logical place to connect the edge dots, however edge hosting is accomplished. That would at least open the prospect of using a common platform for development of cloud and edge applications.
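As a sketch of what “connecting the edge dots” might mean in code, here’s a hypothetical cloud-side correlation step that groups events from different sources into shared time windows; the function name and event shape are my own assumptions, not any provider’s API:

```python
from collections import defaultdict

def correlate_by_window(events, window_seconds=5.0):
    """Group events from many sources/locations into shared time windows
    so cross-source patterns can be detected centrally. Events are
    (source_id, timestamp) pairs here for brevity."""
    windows = defaultdict(set)
    for source_id, timestamp in events:
        bucket = int(timestamp // window_seconds)
        windows[bucket].add(source_id)
    # A window touched by several sources suggests a correlated,
    # multi-site condition worth a coordinated response.
    return {b: s for b, s in windows.items() if len(s) > 1}

events = [("dock-3", 100.2), ("truck-17", 101.9), ("dock-3", 250.0)]
print(correlate_by_window(events))  # {20: {'dock-3', 'truck-17'}}
```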
One area that could drive a new level of symbiosis between cloud and edge is artificial intelligence and machine learning. I noted in my first edge blog in this series that most transformational edge applications were likely to involve synchronizing the real world with a “digital twin” created by an application. AI/ML could be extraordinarily valuable not only in creating that digital twin, but in drawing insights from it. Even where the event sources are fairly concentrated, cloud AI/ML tools are more accessible than premises-hosted tools, according to enterprises, and using the cloud tools could permit AI/ML elements to migrate edge-ward if edge-as-a-service becomes available, or if cloud-provider tools are extended to the premises via the providers’ private edge initiatives.
All this is good for the public cloud providers, of course, and it ties in with their partnerships with the telcos, too. One thing the cloud providers realize is that their efforts to create cloud/edge symbiosis will likely accelerate interest in edge-as-a-service, and that could create a risk for them if they’re not prepared to exploit that interest. Rather than buying real estate in every “significant” metro area (the number of these varies depending on who’s estimating, but my figure is 3,100 for the US), why not partner with somebody who has the real estate to start with, meaning the telcos?
The problem with this approach is that these partnerships are driven, from the telco perspective, by interest in having cloud providers host 5G elements. The requirements for that mission have little or no relationship to the requirements of generalized edge computing applications, which means there is little chance that middleware tools developed for 5G will have any value down the road. That raises the obvious question of where the tools for those general edge missions will come from. Are they already part of the cloud?
No, IMHO, they are not. It’s hard to prove that without an example, so let me propose the following. If edge computing is driven by events, and if the largest number of event-based applications involve the creation of a digital twin of elements of the real world, then digital twinning is the primary feature required for whatever edge applications emerge. Cloud providers don’t offer anything in particular in that area today, and that is their greatest risk.
Digital twinning would feed events into an AI/ML framework that creates a kind of lightweight abstraction representing the real-world target. That abstraction would then be used to draw insights about the current and future behavior of the target, and perhaps to invoke control responses that change that behavior. It’s not so much an application as a middleware suite on which applications would then be built. Think of it as a modeling process.
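Nothing like this is offered as a packaged service today, but a minimal sketch of the middleware pattern might look like the following, with every name hypothetical and a trivial threshold check standing in for whatever AI/ML model an application would actually supply:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DigitalTwin:
    """A lightweight abstraction of one real-world target, kept in sync
    by events. A trivial threshold check stands in for the AI/ML model."""
    target_id: str
    state: dict = field(default_factory=dict)

    def ingest(self, event: dict) -> None:
        # Synchronize the twin with the real world.
        self.state.update(event)

    def predict(self) -> str:
        # Stand-in for an AI/ML inference over the twin's current state.
        if self.state.get("temperature", 0.0) > 8.0:
            return "excursion-likely"
        return "nominal"

    def control_response(self) -> Optional[str]:
        # Optionally close the loop back to the real-world target.
        if self.predict() == "excursion-likely":
            return f"increase-cooling:{self.target_id}"
        return None

twin = DigitalTwin("container-42")
twin.ingest({"temperature": 9.1, "door": "closed"})
print(twin.predict())           # excursion-likely
print(twin.control_response())  # increase-cooling:container-42
```

The point of the pattern is that the twin is middleware: applications would be written against the twin’s state, insights, and control responses rather than against raw event streams.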
The fact that cloud providers don’t offer specific tools to facilitate edge-enabling applications doesn’t mean that cloud and edge aren’t intertwined, only that the relationship isn’t optimized today. That, in turn, means that others could seize control of the edge by creating the tools that would facilitate those edge-enabling applications, like my digital twinning example. The question is whether any others will.
It seems pretty clear that the network operators aren’t going to transform edge-think. Some, like AT&T, seem to have at least some handle on the potential of edge computing, and even on how the edge dynamic could develop. However, even AT&T is looking at alliances with cloud providers to realize its edge vision, and some of its deals have ceded AT&T projects and staff to those cloud providers. That suggests the operators don’t intend to push things on their own, and the reason is probably their lack of cloud expertise. Things like NFV (network functions virtualization), the operators’ vision for software hosting of network features, have no real role in edge computing.
IT vendors could be another source of edge power. Dell, HPE, IBM/Red Hat, and VMware are all potential giants in the edge space, because early edge deployments are likely to be dominantly on premises rather than as-a-service. An edge model designed for premises hosting could easily be translated to a cloud deployment on containers or VMs, which means these vendors are the only likely source of an edge platform architecture that doesn’t mirror some public cloud provider’s tools.
The deciding point here, I think, is 5G. 5G is the only “application” of edge computing that is clearly funded and deploying, in the form of edge hosting of 5G elements. The network operators’ relationship to the public cloud providers almost guarantees that 5G will be deployed on the public cloud, and that may be due in large part to the fact that those IT vendors I’ve cited haven’t created a compelling 5G story.
Since they’ve missed 5G already, the IT vendors’ only shot at edge dominance lies in IoT-related missions like digital twinning, which those vendors have also so far ignored. Unless that changes during 2021, I think attempts to wrest the edge architecture from the public cloud are unlikely to succeed, and we can say that the edge of the future is an extension of the cloud of the present, but an extension that has not yet been realized, or even suggested, by the cloud providers themselves.