Six Data Points on O-RAN

In a single day last week, I saw six news items that demonstrate the market, business, and technology challenges posed by 5G. None of this tumult means that 5G isn’t going to happen, but it seems to me to demonstrate that we’re not yet really sure how it’s going to happen. The who-wins and what’s-offered pieces are still very much up in the air, which of course means that vendors face a period of unparalleled 5G opportunity and risk.

On the service provider side, we have an interesting mixture. India is likely to postpone its 5G auction but has approved 5G trials, a move intended to give bidders time to arrange financing for spectrum deals that were expected to be costly, as they’ve been in other markets (recently, the US). Then Malaysia invited bidding for its planned “wholesale 5G” network, one of the first examples of a government attempting to create a 5G service model similar to the wireline-focused NBN in Australia. Obviously, there’s a fear that reliance on traditional carrier-funded deployment of 5G might disadvantage the country.

The point here is that 5G isn’t free for providers, so if consumers are unwilling to pay more for it, deployment is a financial balancing act. In countries where cellular services are highly competitive and where the total addressable market is reasonable, operators are able to stomach the cost because network expansion via 5G technology is essential to preserving their market credibility. Elsewhere, things aren’t as rosy, but even where 5G is budgeted, there’s an awareness that the cost might put operators under financial stress for years.

This is the principal driver behind the interest in open-model 5G in general, and O-RAN in particular. Sure, O-RAN supporters may have the same hopes for incremental revenue from 5G business services, network slicing, and the ever-popular sell-5G-to-things IoT space, but they recognize that they can’t count on any of that to build profit margins on the service, so cost reduction has to be the rule.

There are a lot of possible ways that open-model 5G could reduce cost and risk for operators, and Dish Network has focused on one in particular: hosting 5G components in the public cloud, specifically Amazon’s AWS. This reveals a risk and cost source that’s especially problematic for 5G operators who don’t have extensive wireline facilities to tap for locating servers. 5G needs hosting, and in particular what would be considered “edge hosting” for real-time support of some service functions. Dish doesn’t have the real estate and doesn’t want to make the investment in data centers, so grabbing cloud provider support is a good choice.

An open-model counterpoint is T-Mobile, who is a very aggressive 5G provider with plans to expand into other spectrum bands. Their CTO has been a bit skeptical about O-RAN, and because T-Mobile/Sprint has traditional 4G technology in place, they’re more inclined to expand on current technology and vendor commitments. Verizon and AT&T have taken the other tack, with at least significant O-RAN interest, but for established wireless network operators, it’s obvious that O-RAN isn’t an automatic winner.

T-Mobile’s fear is based largely on the “not-ready-for-prime-time” risk. Telcos have been traditionally slow to accept open-source technology (I was part of an initiative in the TMF to promote open-source to the telco community, showing that it needed promotion to succeed there), and new open-source technology is even harder to stomach. However, T-Mobile’s decision was made last year, when there were no large and highly credible O-RAN providers. We now have O-RAN commitments from vendors ranging from VMware (who I think has the best story) to IBM/Red Hat, Microsoft/Metaswitch, Mavenir, and more.

Best-of-breed thinking seems to be fading in the O-RAN space. A big solution source is important because telcos hate to have to do integration, and hate to have to deal with finger-pointing when a problem occurs. These two issues are in fact the largest barriers to a near-term O-RAN commitment, but they’re not the only barriers. We have an unsettling amount of telco-think baked into 5G in general, and into O-RAN.

O-RAN, like 5G, is explicitly linked with ETSI NFV, and that could be highly problematic for a number of reasons. Top of the list is that O-RAN is really (as I’ve suggested above) an edge computing application, which is a cloud application. NFV isn’t a cloud-centric specification, it’s a network-centric specification, and it was designed from the first to replace physical network functions (boxes) with virtual network functions. O-RAN never presumed boxes, nor does 5G, so IMHO the right move would be to link to cloud specifications and practices, not NFV.

NFV was also dominated, early on, by per-customer services. For a box-replacement strategy, it makes sense to focus where there are a lot of boxes, and so many of the early proof-of-concept initiatives in NFV were aimed at business services and the replacement of CPE with virtual CPE. 5G is a multi-tenant infrastructure, and the majority of its users aren’t business users. Cloud computing’s technology has advanced in no small part because of social-media contributions to software design, which are targeted at the consumer market.

Right now, O-RAN and 5G are hampered by the fact that operators and some vendors are trying to justify funding and staffing a decade-long slog through NFV specifications even though the goal isn’t worth the effort. Interestingly, there have been some examples of a technical end-run around the NFV logjam. The general edge computing opportunity is at the heart of this.

Edge computing is a form of the cloud in the sense that it’s a resource pool. However, it’s a local resource pool by definition, since “edge” means “close-to-the-user”. It’s likely that for a given geography, “edge” resources wouldn’t be concentrated into a giant cloud data center, but distributed across a lot of locations. Early on, many believed that virtual-machine technology wasn’t the right answer for this, and even NFV diddled with “containerized network functions” or CNFs. That would imply Kubernetes for lifecycle automation rather than NFV’s MANO and NFVI, and Kubernetes may need tweaking for that to be practical.
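To make the Kubernetes-versus-MANO point concrete, here’s a minimal sketch of how a containerized network function’s lifecycle would be expressed declaratively. Everything in it is hypothetical, not drawn from any actual O-RAN implementation: the component name, image, labels, and port are illustrative placeholders. The point is that placement, scaling, restart, and health-checking, which MANO would orchestrate procedurally, are simply declared and left to Kubernetes:

```yaml
# Hypothetical CNF deployment sketch. Kubernetes, rather than NFV's
# MANO/NFVI, owns the lifecycle: placement, scaling, restarts, health.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cu-up-cnf                 # hypothetical O-RAN Centralized Unit (user plane)
spec:
  replicas: 3                     # horizontal scaling, declared rather than scripted
  selector:
    matchLabels:
      app: cu-up-cnf
  template:
    metadata:
      labels:
        app: cu-up-cnf
    spec:
      nodeSelector:
        topology.example.com/site: edge-site-1   # hypothetical label pinning pods to one edge location
      containers:
      - name: cu-up
        image: registry.example.com/cu-up:1.0    # hypothetical container image
        livenessProbe:                            # Kubernetes restarts unhealthy pods on its own
          httpGet:
            path: /healthz
            port: 8080
```

The “tweaking” mentioned above would land exactly in the places this sketch glosses over: real-time scheduling, high-performance data-plane networking, and topology-aware placement across many small edge sites, none of which vanilla Kubernetes handles out of the box.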

I think we’re still behind the duck with this question, though. The more important consideration in edge computing is how to frame the platform software so that it supports not only O-RAN and 5G Core, but also other future edge-justifying applications, like the mysterious and venerable IoT. The edge, for all these applications, has to be very cloud-like, and NFV is not that, nor is it ever likely to be that. It’s critical, if O-RAN is going to be the on-ramp to edge computing, that it promote the right platform software to support edge opportunities in general. That means it should be not only cloud-centric, but cloud-native. More on that in a blog later this week.

The hope that we could somehow synthesize a workable 5G model for the edge out of all of this seems a bit tenuous at this point, but it gets worse. The media has discovered 6G (with an assist from some vendors), and while it’s easy to dismiss this as another press excursion into neverland, the truth is that there is preliminary work being done on 6G, and that could be a major problem. Standards in 2026 and deployment in 2028? How long has 5G taken, gang? The reason this is a problem is that we are surely at least two or three years from truly effective full-scale 5G implementations even today, and if we were to believe that by 2026 the next generation would be here (on paper), why bother?

At some point, realism has to win over publicity, or wireless, 5G, 6G, and edge computing could all be threatened. Or maybe everyone will just turn their back on telecom standards and assume they’ll cobble together networks from cloud technology. Well, they could surely do worse.