The battle for public cloud supremacy is far from over, but it may be taking a new direction. Amazon has announced its Snowball Edge appliance, which some characterize as an extension of the early Greengrass technology that let users run AWS elements on premises, meaning at the edge. This comes as Microsoft is expected to announce significant Azure gains, which some on the Street estimate could approach 100% year over year, based in part on the symbiosis between Azure and on-premises Windows servers. The IEEE has adopted the OpenFog Consortium’s reference architecture for edge computing, and Gartner says that in four years, 75% of enterprise data will be processed at the edge, not in the cloud. Is an edge revolution happening?
Some of this is overblown, of course. First, as I’ll explain, Snowball and Greengrass are really aimed at different stuff. Second, if that 75% of data that’s edge-processed includes work handled on the premises, then current premises data centers and desktops are the edge too, and we’re already well over that 75%. If Gartner means that 75% of data will be processed by fog computing or Snowball-like cloud extension, good luck with that one. Clearly we need to explore what “edge computing” is, and even what the “edge” is, specifically.
The cloud computing market has been blundering through a series of misunderstandings from the first. There was never any chance that everything was going to migrate to public cloud services. Our model has consistently placed the maximum percentage of current IT applications migrating to public cloud at around 24%. At the same time, though, our model said that new applications written for the cloud could almost double IT spending. That means that the total new cloud opportunity would roughly equal what’s currently spent on legacy IT missions.
Enterprises told us, starting in about 2012, that their cloud plans were to build extensions to current business applications out from “on-ramps” to these current applications, into the cloud. These extensions would enhance worker productivity, improve inventory management and cash flow, and serve other recognized business needs.
Cloud front-ends to existing applications may involve data-sharing between the premises and the cloud, which is what Snowball is really for. It lets Amazon customers move large amounts of data back and forth, something that’s essential both for the scaling and redeployments associated with cloud backup and for front-end transaction editing. Amazon with Snowball is targeting the enterprises that, so far, Microsoft has been winning with Azure.
I think both Amazon and Microsoft believe that they will find it difficult to make a business case for deploying their cloud services in their own edge-hosting facilities. Snowball and Greengrass show that Amazon has no intention of deploying edge hosting in the near term at least, and Microsoft has created its equivalent premises-edge model by linking Azure to Windows. The problem with edge hosting is the economics. An “edge” that’s not close to its user base isn’t an edge at all, so edge deployment means pushing computing out close to every customer, not just to a few. That demands a lot of real estate and also reduces the utilization efficiency of the servers.
Network operators already have the real estate, of course, but they are hardly powerhouses in the cloud computing space. I asked a Tier One when they thought they’d deploy edge computing resources to support public cloud services, and they responded “When there’s a significant demand for them.” I asked how that demand could develop, absent any facilities to realize it, and they shrugged.
The problem operators face is that in 2022, the end of Gartner’s four years, the opportunity for network operators to justify data centers through public cloud services will be declining as a percentage of total data center opportunities. Not only that, my model says that even if operators realized every one of their data center deployment opportunities combined, it would be 2023 or 2024 before they could build out enough edge density to offer widespread edge computing.
That’s why Snowball is important. I would argue that the opportunity Amazon is targeting with Snowball is the one enterprises are already pursuing, which is making the cloud into a front-end extension to transaction processing. That can require access to a lot of data that, if it were accessed on premises, would run up transfer charges and introduce delays. Greengrass enables the premises edge, while Snowball doubles down on the cloud as a front-end. Amazon’s Snowball may be an acknowledgment that enterprise edge computing isn’t exactly looming.
The premise connecting edge computing to events is simple: shorten the control loop. The greater the distance between an event source and the event processing, the greater the latency injected into the round-trip between event and response. There are obviously events that require quick action; most of those associated with process control come to mind, and then there’s the ever-popular self-driving cars and self-regulating intersections. The problem is that these events require a chain of future application decisions, each of which has technical, policy, and financial issues to deal with. Finding the magic event-and-edge formula for enterprises isn’t going to be easy, or quickly accomplished.
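The control-loop arithmetic is easy to sketch. Assuming signals travel through fiber at roughly two-thirds the speed of light (about 200 km per millisecond, a standard rule of thumb), the propagation floor of the event-to-response round-trip grows linearly with distance to the processing point; the distances below are illustrative, not measurements:

```python
# Rough round-trip propagation delay as a function of distance to the
# processing point. Assumes ~200,000 km/s signal speed in fiber (about
# two-thirds of c); ignores queuing, serialization, and processing time.
FIBER_KM_PER_MS = 200.0  # ~200 km of fiber per millisecond, one way

def round_trip_ms(distance_km):
    """One event/response cycle: out to the processor and back."""
    return 2 * distance_km / FIBER_KM_PER_MS

# A hypothetical metro edge site ~50 km away versus a regional cloud
# zone ~1,500 km away:
print(round_trip_ms(50))    # 0.5 ms
print(round_trip_ms(1500))  # 15.0 ms
```

Even this best-case floor, before any server-side processing is counted, shows why latency-sensitive event handling pulls computing toward the edge.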
The most likely sources of “events” that would require edge processing are actually things like consumer streaming video and ad delivery. Personalization and contextualization are generally related to a combination of where a person is and what they’re doing, which can be event-based. Since ad-related and video-related caching is also an edge function, it looks to me like edge computing will have to be driven by these applications, and any exploitation of the edge by business applications will have to either exploit “edges” deployed on the customer premises, or in-place edge resources justified another way.
The good news is that it’s easier to justify edge computing based on a small number of related applications and a limited number of users who’d have to deploy it. Video/ad-related services are operator-targeted, and a solid hundred operators who bought into the notion would drive over 80% of all edge opportunity.
The bad news is that even with edge-hosting resources in place (justified by another event mission), we still have to deal with the question of linking event processing and transaction processing in a way that delivers clear benefits. Both Amazon and Microsoft have been working on that, but optimal event computing means what’s now being called “Function-as-a-Service”, and it’s a different style of programming and application workflow management. A clear model has to emerge, and then be applied to each application/vertical opportunity.
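To make the programming-style shift concrete, here is a minimal sketch of a function-as-a-service event handler, loosely following the AWS Lambda Python handler convention (`def handler(event, context)`). The event fields, the temperature threshold, and the returned actions are all invented for illustration; the point is that the unit of logic is a small, stateless function the platform invokes per event, not a long-running transactional program:

```python
# Hypothetical FaaS-style event handler for a process-control event.
# The event shape, threshold, and action names are illustrative only.
TEMP_LIMIT_C = 85.0  # assumed shutdown threshold for this sketch

def handler(event, context=None):
    """Stateless: all inputs arrive in the event; the result is returned.
    The hosting platform, not the application, decides where this runs."""
    reading = event.get("temperature_c")
    if reading is None:
        return {"action": "ignore", "reason": "no reading"}
    if reading > TEMP_LIMIT_C:
        return {"action": "shutdown", "reading": reading}
    return {"action": "log", "reading": reading}

print(handler({"temperature_c": 90.2}))  # {'action': 'shutdown', 'reading': 90.2}
```

The workflow-management problem the paragraph above describes is exactly what this style surfaces: each such function handles one step, and something else has to chain the resulting decisions into the transaction flow.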
Events will eventually drive the cloud, and will eventually create edge/fog computing, but the applications that drive those events aren’t ones enterprises will run. We’ll have to stop looking for event-killer-apps among the business applications and focus instead on the consumer space. That’s what’s going to get those edge data centers built.