Everyone is fascinated by the relationship between 5G and edge computing. Perhaps the most fascinating thing about it is that we don’t really know how that relationship works, or even whether it’s real. 5G is a technological evolution of wireless; edge computing is a technological evolution of the cloud. All technology evolutions have to pass the benefit-sniff test, and at this moment it’s not clear that either 5G or edge computing can. It’s not clear that they’re truly symbiotic, either. The good news is that they could be, and that both could be wildly successful if done right.
I used to read “Popular Science” as a boy, and I can remember an article on how nuclear weapons could be used to blast huge underground tanks to hold fuel, water, and other stuff. It was feasible, novel, and interesting, but it obviously didn’t pass the benefit-sniff test, because we never did it. The point is that it’s interesting, even fun, to speculate about how technology and networking will change our lives, but to make any of it real in a given timeframe, we have to make it worthwhile financially. We have to be sure that 5G and edge computing aren’t building nuclear-crater tanks.
A recent Wall Street report says that “a key factor in any 5G debate is the ability to support low-latency applications,” which is its justification for predicting that 5G will re-architect the metro network to increase fiber capacity and reduce hops, and will promote edge placement of computing. The only thing that separates this from nuclear craters is the presumption that low-latency applications have immediate, credible benefits to reap, benefits large enough to offset the costs and risks.
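Why does proximity matter for latency at all? A back-of-the-envelope propagation calculation makes the point; the sketch below is illustrative, and the site distances are assumptions, not measurements.

```python
# Illustrative sketch: round-trip propagation delay vs. server distance.
# Light in optical fiber travels at roughly 200,000 km/s (about 2/3 of c);
# the three hypothetical site distances are examples, not real topology.

FIBER_SPEED_KM_PER_MS = 200.0  # ~200 km per millisecond in fiber

def round_trip_ms(distance_km: float) -> float:
    """Propagation-only round-trip time; ignores queuing and processing."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

for site, km in [("metro edge", 20), ("regional center", 500), ("national cloud", 2000)]:
    print(f"{site:16s} {km:5d} km -> {round_trip_ms(km):5.1f} ms round trip")
```

At these assumed distances, a metro-edge host adds a fifth of a millisecond of propagation delay where a national data center adds twenty milliseconds, which is the entire argument for fewer hops and edge-placed computing.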
Low-latency applications really mean IoT or other event-driven applications. There’s certainly credibility to these applications, but there is still no real progress on creating meaningful opportunities that would justify a whole new wireless infrastructure. Can we say what might? Sure, just as I could have listed possible liquids (or even solids) that could be stored in a nuclear-bomb-built container. The question isn’t possible applications; it’s possible high-ROI applications.
That’s the big problem for those who say that 5G will drive NFV or edge computing or IoT or whatever. 5G is applicable to all of those things, which means that 5G could benefit if they happened on their own, and that each of them could benefit if 5G happened on its own. Linking two suppositions, however, doesn’t create certainty. Something has to start the ball rolling, and there’s a big barrier that something has to cross.
The biggest barrier to the unity of 5G and edge computing for low-latency applications is managing the “first cost”. Do operators spend hundreds of billions of dollars to deploy this wonderful mixture, and then sit back and hope somebody uses it and pays enough to make it revenue- and cash-flow-positive? We all know that’s not going to happen, so there will have to be a credible service revenue return, and an acceptable risk in terms of first cost.
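To make the first-cost problem concrete, here’s a minimal payback sketch. Every figure in it is a hypothetical placeholder, chosen only to show the shape of the arithmetic an operator would run, not a forecast of any kind.

```python
# Hypothetical first-cost sketch: all figures below are invented placeholders,
# not forecasts. The point is the shape of the calculation, not the numbers.

first_cost_b = 100.0      # up-front deployment cost, $B (hypothetical)
annual_revenue_b = 12.0   # new service revenue per year, $B (hypothetical)
annual_opex_b = 4.0       # added operating cost per year, $B (hypothetical)

annual_cash_flow_b = annual_revenue_b - annual_opex_b
payback_years = first_cost_b / annual_cash_flow_b
print(f"Simple payback: {payback_years:.1f} years")  # 12.5 years at these numbers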
The logical pathway to achieving that goal depends a bit on geography, but it centers on what’s called “Non-Standalone” or NSA. This is an overlay of 5G New Radio (NR) on the 4G core (the Evolved Packet Core, or EPC), which means it has none of the 5G Core features like network slicing. What NSA does is let new 5G frequencies be opened, 5G handsets be used, and a transition to 5G begin. A somewhat similar model applies 5G millimeter wave to extend fiber-to-the-node (FTTN) as a means of cheaper broadband provisioning in urban and suburban areas. That justifies new services and frequencies, but it’s less an on-ramp to full 5G than an on-ramp from linear TV to streaming IP.
The reason I’m linking these two is that, as far as edge computing is concerned, 5G is only a potential driver (like pretty much everything else). There are five others, as I noted in a previous blog, but the most compelling in the near term is a combination of the linear-TV-to-streaming-IP-video transition and the monetization of the advanced video and advertising features that arise from that transition. Both the 5G NSA and mm-wave options are possible drivers of the streaming transformation, which makes that transformation more important than 5G alone.
Internet video depends more on caches, meaning content delivery networks (CDNs), than on Internet bandwidth inside the access network. Streaming live video has to be managed differently, because live viewing creates something on-demand streaming doesn’t: a high probability of coincident viewing of the same material. You can also predict, from viewer behavior, roughly how many users will stream a given show and where they’ll be when they do. Live streaming therefore requires improved caching technology. Most operators say that mobile video, as opposed to wireline, is less likely to be real-time streaming, but mobile video still needs caching control to accommodate the viewer’s movement through cells. This kind of caching differs from traditional commercial CDN caching in that it sits inside the operator network rather than connecting at the edge.
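What might that predictive caching look like in practice? The sketch below pre-positions live channels at edge cache sites when predicted concurrent viewing is high enough to justify a local copy. It’s a toy under assumed inputs: the forecast data, the site names, and the 500-viewer threshold are all hypothetical, not any operator’s actual system.

```python
# Sketch: decide which live channels to pre-position at each edge cache site,
# based on predicted concurrent viewers per site. The forecast values and the
# 500-viewer threshold are hypothetical stand-ins for real behavioral models.

from collections import defaultdict

PREPOSITION_THRESHOLD = 500  # assumed break-even concurrency for a local copy

def plan_prepositioning(predicted_viewers):
    """predicted_viewers: {(site, channel): expected concurrent viewers}."""
    plan = defaultdict(list)
    for (site, channel), viewers in predicted_viewers.items():
        if viewers >= PREPOSITION_THRESHOLD:
            plan[site].append(channel)  # serve from the edge cache at this site
        # otherwise the stream is pulled from a deeper, shared cache on demand
    return dict(plan)

forecast = {("metro-east", "news-1"): 4200, ("metro-east", "sports-2"): 310,
            ("metro-west", "sports-2"): 2900}
print(plan_prepositioning(forecast))
# {'metro-east': ['news-1'], 'metro-west': ['sports-2']}
```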
Streaming video also encourages ad customization; in fact, AT&T cited the potential of improved ad targeting for revenue gains on its most recent earnings call. Ad customization means not only better ad caching, to avoid disruptions in viewing when switching from program to ad or between ads, but also logic to serve the optimum ad to each viewer. Combine this with the live-program caching issues above and you have the most credible near-term opportunity for edge computing.
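As a rough illustration of that “optimum ad” logic, the sketch below picks the best already-cached ad for a viewer so the program-to-ad switch never waits on a cache fill. The scoring rule and every field name here are assumptions standing in for a real targeting model.

```python
# Sketch of edge-side ad selection: pick the best *already-cached* ad for a
# viewer so the program-to-ad switch is seamless. Scoring is a toy stand-in
# for a real targeting model; all field names here are hypothetical.

def pick_ad(viewer_segments, cached_ads):
    """cached_ads: list of dicts with 'id', 'segments', and 'cpm' fields."""
    def score(ad):
        overlap = len(viewer_segments & ad["segments"])
        return (overlap, ad["cpm"])  # prefer targeting match, then ad value
    candidates = [ad for ad in cached_ads if ad["segments"] & viewer_segments]
    chosen = max(candidates, key=score) if candidates else cached_ads[0]
    return chosen["id"]  # falls back to the first cached ad if nothing matches

ads = [{"id": "ad-42", "segments": {"sports", "auto"}, "cpm": 18.0},
       {"id": "ad-07", "segments": {"cooking"}, "cpm": 25.0}]
print(pick_ad({"auto", "travel"}, ads))  # -> ad-42
```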
What this means to me is that 5G, which for most reports means “5G Core” features like slicing, isn’t going to be a near-term driver for edge computing because it’s not going to happen in the near term. Operators will not deploy edge computing now to support a possible 5G Core deployment in 2021 or 2022. They would deploy it to support enhanced video caching and ad support to accommodate a shift from linear TV to streaming live IP video.
I also believe that IoT-driven 5G deployments, often cited by supporters of the “5G drives the edge” strategy, are unlikely to affect edge computing in the near term, though they have a better shot than 5G Core. If 5G NR happens, and if there’s a major onrush of new 5G-connected IoT elements, then you’d have a set of low-latency applications. That’s two “ifs”: the first is credible if we define “near-term” as 2020, and the second has that now-familiar business-case and first-cost problem.
Edge computing can only be driven by things like streaming video or IoT, and of those two, only the streaming-video shift has any promise of delivering compelling benefits in the next two years. 5G in the form of the 5G/FTTN hybrid could drive streaming video, and streaming video could then drive edge computing. If we narrow our scope to say that 5G/FTTN, followed by 5G NSA, could turn live TV into a streaming-video lake, then we have a compelling link to edge computing. Without the streaming-video presumption, though, 5G won’t get us to edge computing for at least four years, and even then it’s a guessing game based on whether IoT ever finds a realistic business model. Till then, we’re digging hypothetical nuclear holes in the ground.