One of my recent blogs on 5G generated enough LinkedIn buzz to demonstrate that the question of 5G and hype is important, and that there are different interpretations of what constitutes 5G success. To me, that means I've not explained my position as well as I could have, which means I need to take another stab at the issue, specifically addressing a key point of mine.
My basic position on issues relating to 5G (or any other technology) is that there is a major difference between what you can use a technology for, and what justifies the technology. As I said in the referenced blog, there is not now, nor has there ever been, any realistic chance that 5G would not deploy. It's a logical generational evolution of mobile network technology, designed to accommodate the growing and evolving market. In fact, one of the most important facts about 5G is that it will deploy, which means that having a connection to it offers vendors an inroad into something that's budgeted. This, at a time when budget constraints on network operator spending are an ongoing problem for vendors.
The question with 5G, then, isn't whether it will happen, but rather what will drive it, and how far the driver(s) will take it. Putting this in very simple terms, we have two polar positions we could cite. The first is that 5G is really nothing more than the evolution of LTE to 5G New Radio (NR), and that little or no real impact can be expected beyond the RAN. This is the "Non-Standalone" or NSA vision; 5G rides on an evolved/expanded form of the 4G Evolved Packet Core (EPC). The second is that 5G concepts, contained in 5G's own Core, will end up transforming not only mobile networks but even wireline infrastructure, particularly the access/metro networks. Obviously, we could land at either extreme or somewhere in between.
Where we end up on my scale of Radio-to-Everything-Impacted will depend not on what you could do with 5G, but on what incremental benefit to operator profits 5G could create. If 5G offered a lot of really new applications that would justify additional spending on 5G services, and in particular if operators could expect some of those new applications to be services they'd offer and get revenue from, then 5G gets pushed toward the "Everything" side of my scale. If 5G could offer a significant improvement in opex overall, then it would be pushed toward "Everything" as far as the scope of the improvements justified. If neither happens, then 5G stays close to the "Radio" side of the scale, because there's no ROI to move the needle.
If 5G does in fact end up meaning little more than a higher-capacity, faster RAN, it doesn't mean that 5G Core would not deploy. It would mean that the features of 5G Core that were actually used, and that could actually differentiate one 5G network (or vendor product) from another, would be of less value and less differentiating. In fact, they might not even be offered as part of a service at all, in which case there would be no chance the market could eventually figure out how to build applications/services that would move my needle toward the "Everything" end of the scale.
My view of the possible drivers to move 5G toward the “Everything” end of the scale has been that they relate to applications of 5G beyond calling, texting, and simple Internet access. That, to me, means that there has to be a set of service features that are valuable to users, deliverable to a community of devices, and profitable for the operators to deploy. I doubt that anyone believes that something that met these requirements could be anything but software-based, and so I believe that exploiting 5G means developing software. Software has to 1) run somewhere, and 2) leverage some easy (low-on-my-scale) property of 5G to exploit low-apple opportunities and get something going.
Software that’s designed to be edge-hosted seems to fit these criteria. One of 5G’s properties is lower latency at the radio-connection level, which is meaningful if you can pair it with low latency in connecting to the hosting point for the software, the edge. Further, 5G itself mandates function hosting, which means that it would presumably justify some deployment of edge hosting resources, and those might be exploitable for other 5G services/features/applications. However, that’s less likely to be true if the software architecture, the middleware if you like, deployed to support 5G hosting doesn’t work well for general feature hosting. 5G can drive its own edge, but it has to be designed to drive a general edge to really move my needle.
There's been no shortage of missions cited as drivers of 5G. Autonomous vehicles are one; robots and robotic surgery are another. All of this reminds me of the old days of ISDN, when "medical imaging" was the killer app (that, it turns out, killed only itself). All these hypothetical 5G applications have two basic problems. First, they require a significant parallel deployment of technology besides 5G, and so have a very complicated business case. Second, it's difficult to frame a business model for them in any quantity at all.
If anyone believes that self-driving cars would rely on a network-connected driving intelligence to avoid hitting pedestrians or each other, I’d gently suggest they disabuse themselves of that thought. Collision avoidance is an onboard function, as we have already seen, and it’s the low-latency piece of driving. What’s left for the network is more traffic management and route management, which could be handled as public cloud applications.
Robots and robotic surgery fit a similar model, in my view. The latency-critical piece of robotics would surely be onboarded to the robot, as it is today. Would robotic surgery, done by a surgeon distant from the patient, be easily accepted by patients, surgeons, and insurance companies? And even if it were, how many network-connected robotic surgeries would be needed to create a business case for a global network change?
Why have we focused on 5G “drivers” that have little or no objective chance of actually driving 5G anywhere? Part of it is that it’s hard to make news, and get clicks, with dry technology stories. Something with user impact is much better. But why focus on user impacts that aren’t real? In part, because what could be real is going to require a big implementation task that ends up with another of those dry technology stories. In part, because the real applications can’t be called upon for quick impact because they do require big implementation tasks, and vendors and operators want instant gratification.
How do we get out of this mess? Two possible routes exist. First, network operators could create new services, composing them from edge-hosted features, and target service areas that would be symbiotic with full 5G NR and Core. Second, edge-computing aspirants could frame a software model that would facilitate the development of these applications by OTTs.
The first option, which is the "carrier cloud" strategy, would be best financially for operators, but the recent relationships between operators and public cloud providers demonstrate that operators aren't going to drive the bus themselves here. Whether it's because of a lack of cloud skills or a desire to control "first cost" for carrier cloud, they're not going to do it, right or wrong though the decision might be.
The second option is the only option by default, then, and it raises two of its own questions. The first is who does the heavy lifting on the software model, and the second is just what capabilities the model includes. The answers to the two questions may end up being tightly coupled.
If we go back to the Internet as an example of a technology revolution created by a new service, we see that until Tim Berners-Lee, in 1990, defined the HTML/HTTP combination that created the World Wide Web, we had nothing world-shaking. A toolkit opened an information service opportunity. Imagine what would have happened if every website and content source had to invent its own architecture. We'd need a different client for everything we wanted to access. Unified tools are important.
Relevant tools are also important. Berners-Lee was solving a problem, not creating an abstract framework, and so his solution was relevant as soon as the problem was, which was immediately. The biggest problem with our habit of creating specious drivers for 5G is that it delays considering what real drivers might be, or at least what they might have in common.
Public cloud giants Amazon, Google, and Microsoft have a track record of building “middleware” in the form of web-service APIs, to support both specific application types (IoT) and generalized application requirements (event processing). So do software giants like IBM/Red Hat, Dell, VMware, HPE, and more. Arguably, the offerings of the cloud providers are better today, more cohesive, and of course “the edge” is almost certainly a special case of “the cloud”. There’s a better chance the cloud providers will win this point.
The thing that relates the two questions of “who” and “what” is the fact that we don’t have a solid answer to the “what”. I have proposed that the largest number of edge and/or 5G apps would fit what I call a contextual computing model. Contextual computing says that we have a general need to integrate services into real-world activity, meaning that applications have to model real-world systems and be aware of the context of things. I’ve called this a “digital twin” process. However, I don’t get to define the industry, only to suggest things that could perhaps define it. If we could get some definition of the basic framework of edge applications, we could create tools that took developers closer to the ultimate application missions with less work. Focus innovation on what can be done, not on the details of how to do it.
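To make the "digital twin" idea a bit more concrete, here is a minimal sketch of a contextual-computing building block: a software object that mirrors one real-world element and keeps its context current from incoming events. Every class and field name here is a hypothetical illustration, not any standard API:

```python
# Minimal sketch of a "digital twin": a software object that mirrors a
# real-world element and answers contextual queries about it.
# Names and fields are illustrative assumptions, not a standard API.

from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Mirrors one real-world element (a vehicle, a room, a worker...)."""
    twin_id: str
    context: dict = field(default_factory=dict)  # latest known real-world state

    def on_event(self, event: dict) -> None:
        # Each incoming event (from a sensor, a 5G device, etc.) updates context.
        self.context.update(event)

    def near(self, other: "DigitalTwin", radius: float) -> bool:
        # A contextual query: are these two twins close in the real world?
        x1, y1 = self.context["pos"]
        x2, y2 = other.context["pos"]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 <= radius

truck = DigitalTwin("truck-17")
dock = DigitalTwin("loading-dock-3")
truck.on_event({"pos": (10.0, 4.0), "speed_kph": 12})
dock.on_event({"pos": (11.0, 4.5)})
print(truck.near(dock, radius=2.0))  # a service could react when context aligns
```

The value of agreeing on a framework like this is exactly the Berners-Lee point: if the twin model were standard middleware, developers could focus on what their service does when context aligns, not on reinventing how context is tracked.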
And that’s the 5G story, IMHO. 5G proponents can either wait and hope, or try to induce action from a credible player. I always felt that doing something was likely a better choice than hoping others do something, so I’m endorsing the “action” path, and that’s the goal of sharing my 5G thoughts with you.