Geography, Demography, and Broadband Reality

One size, we know both from lore and from experience, doesn’t fit all. The same is true for access technologies. We’re reading stories about the rise of fiber, like this one that quotes Consolidated Communications saying “There are some mobile or temporary use cases where FWA is best, he says, but for the majority of customers, fiber is more cost-effective for Consolidated to deploy.” Just how true is that across an entire national service geography?

There are also stories like this one, saying that broadband in the US is worse, and more expensive, than it is in other countries. Many people have trouble understanding why that would be the case, given the level of tech commitment in the US, and many wonder why, as the story suggests, 7% of the population lacks access to reliable broadband. I think the two questions I’ve asked here have a related answer.

Suppose we take a mile of fiber and air-drop it arbitrarily on some occupied landscape. In an urban area, that fiber could well pass hundreds of thousands of potential customers, and in a rural area it might miss everyone. The return on infrastructure investment would be high in the first case, and zero in the second. That alone says that there is no single answer to the question “What’s the best broadband technology to empower a given country?”
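To put rough numbers on that air-drop, here’s a minimal sketch; the per-mile construction cost and the premises-passed counts are illustrative assumptions I’ve picked for the example, not outputs of my model.

```python
# Illustrative only: cost per premises passed for one mile of access fiber.
# Both the per-mile construction cost and the premises-passed counts are
# assumptions chosen for the example, not actual operator data.

COST_PER_MILE = 70_000  # assumed all-in construction cost for a mile of fiber

premises_passed = {
    "dense urban": 20_000,   # high-rise corridor
    "suburban": 200,
    "rural": 5,
}

for area, premises in premises_passed.items():
    print(f"{area:12s}: ~${COST_PER_MILE / premises:,.0f} per premises passed")
```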

I’ve used “demand density” for decades to measure just how profitable broadband access would be, overall, for a country. Demand density explains how Singapore or Korea have such great, and inexpensive, broadband compared to countries like the US, Canada, and Australia. Among a dozen sample countries, demand densities vary by a factor of 35. That mile of access fiber passes a lot more people in some countries than in others! But most countries have multiple access providers, and many of those serve limited geographies rather than the country overall. What does breaking down a country do to our calculations?

Obviously, you could calculate demand density for any geography where the underlying data was available, which includes things like GDP, occupied area size, and road miles. I’ve done that for the US market, for each state and for AT&T and Verizon’s territories. AT&T serves a more rural territory, and that shows in their demand density, which is a seventh that of Verizon. That explains why Verizon has been pretty aggressive in Fios deployment, relative to its main competitor. On a state basis, things are even more dramatic; the highest state has almost 250 times the demand density of the lowest.

Returning to our fiber air-drop, we can see that for any given country, and any given operator territory within it, there would be a huge variation in demand density depending on where our fiber landed. That variation would be reflected in the business case for fiber access, or any other access technology. The more the variation, the less likely that something like universal fiber would be the best choice for the operator.

Another interesting point is that if you dig down even deeper, you find that almost every country has small, postal-zone-sized areas with very high demand densities. Among industrial economies, the demand densities of these areas are fairly consistent, and all are sufficient to justify fiber access. Similarly, they all have low-density areas where anything other than an RF technology is unlikely to be a practical broadband option.

This is important because how customers are distributed across the spectrum of demand density within an operator geography sets some policy constraints on the operator’s broadband planning. Sometimes regulators impose a mandate for consistent broadband, and sometimes it’s a matter of customer response. Would you want to offer super-fast, inexpensive broadband to a fifth of your market, with the rest having substandard service that might actually cost more? Digital divides are more than abstract when your own deployment plans create and sustain them.

When we hear stories like the one I cited, it’s easy to extrapolate the fiber success to an operator’s entire geography, or even to a whole country (like the US, in this case). That’s a big mistake. When we consider issues like public policy relating to universal broadband at a given minimum speed or with a specific technology, that same kind of extension is another big mistake. One operator told me that they have customers whose 1 Gbps Internet connections would require running five miles or more of fiber to serve those customers alone. Will taxpayers consent to subsidize that kind of cost?

If we set the overall US demand density as 1.0, then my modeling suggests that where demand densities are roughly 5 or 6, you can make fiber/PON work on a decent scale. There are 12 states where that’s true. If we’re talking about 5G/FTTN hybrid broadband, a demand density greater than 2 would work, and roughly half the states could make that work on a large scale. With cellular broadband using mobile 5G technology, 47 states could provide decent service to a large percentage of the population.
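Here’s a minimal sketch of the sort of classification that modeling implies. The thresholds (roughly 5 to 6 for fiber/PON, above 2 for a 5G/FTTN hybrid, mobile 5G almost everywhere) come from the discussion above; the sample state densities are placeholders, not my actual model output.

```python
# Classify access-technology feasibility by demand density normalized to the
# US average (1.0). Thresholds follow the text above; the sample state values
# below are placeholders, not real model output.

def feasible_technologies(density: float) -> list[str]:
    options = ["mobile 5G"]            # workable across most densities
    if density > 2:
        options.append("5G/FTTN hybrid")
    if density >= 5:
        options.append("fiber/PON at scale")
    return options

sample_states = {"State A": 6.3, "State B": 2.4, "State C": 0.4}

for state, density in sample_states.items():
    print(f"{state} ({density}x US average): {', '.join(feasible_technologies(density))}")
```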

These numbers show us a lot about what’s happened, and what’s happening, in the broadband access space. Verizon jumped ahead in fiber deployment because their geography generated a higher ROI. Countries like Japan, Korea, and much of Europe are “ahead” of the US in broadband for the same reason. Google targeted only select areas with its fiber because those areas had decent demand density, while the incumbent operator(s) in the geography, serving a larger area that included Google’s target, couldn’t deploy fiber across enough of that larger area to make it a wise decision.

Even municipal broadband and municipal partnerships with operators to deploy fiber can be explained here. A city faces little pressure to withhold fiber broadband from its residents and businesses just because it can’t provide the same service to other cities. Fiber broadband to high-demand-density pockets is likely to come about increasingly because technology improvements make cherry-picking those areas profitable, as long as it doesn’t generate backlash from the provider’s customers who live outside them. And, of course, eventually our big telcos and cable companies are likely to see that new source of competition and take the risk of creating have-not customer angst.

Then there’s the competitive impact of 5G in any form. T-Mobile has just cut the price of its 5G-based wireline broadband alternative by about a quarter, and mobile operators could in theory use 5G to compete with the fixed broadband offerings of other operators. Wireless doesn’t require you to trench vast quantities of fiber or cable in another territory; all you need is some nodes and/or towers, which of course mobile operators already have.

Consolidated may be right that for most of their customers, fiber is best, but that’s not the case for most customers overall. Fiber will be preferred where demand density is high, but it’s likely that 5G will be more transformational in fixed broadband applications because it’s more versatile across a wide range of demand densities. Operators with limited geographies may be able to deploy fiber to nearly all their customers, and niche fiber plays will surely spring up where densities are high, but universal broadband needs a solution we can apply broadly, and all the wishful thinking and hopeful publicity in the world isn’t going to turn fiber into that solution.

Some Research Says Private 5G is Going to Explode; True?

Private 5G is even more difficult to assess, in value-proposition terms, than 5G overall. My own chats with enterprises suggest that there’s really not all that much going on. Other sources suggest otherwise, and are looking at possible “private 5G models”. TelecomTV has a story on this, and we’ll use that today to look a bit harder at the private space. We’ll also roll in initiatives like those of the ONF with Ananki and Rakuten’s own Communications Platform.

5G is a 3GPP standard for next-gen mobile networking. Semantically speaking, anything that claims to be 5G would have to conform to that standard, but the standard would allow for two broad 5G subdivisions—public 5G service offered by mobile operators, and private 5G deployed by users. If you think of 5G at this level as having a sort-of-cloud-like framework, you could also see “hybrid 5G”, which would combine the two, pairing a private deployment with either a public 5G service or a network slice using that feature of 5G.

There are also mission-based private 5G models. Again, at the highest level, there are “industrial” or “IoT” applications, and then applications designed to offer communications through traditional mobile devices. I’ve not seen very much interest in the traditional-device model so far, but then (as I’ve noted) I’m not seeing a lot of private 5G adoption either, just tire-kicking.

We could also divide private 5G according to the means of implementation, and there are four broad options: an enterprise could partner with one of the mobile infrastructure vendors (Ericsson or Nokia, for example), acquire its infrastructure from one of the open-model 5G providers, buy the technology through an integrator, or do its own 5G integration from available elements.

The TelecomTV article I cited references third-party research, and I’ve pointed out in the past that asking enterprises things like “do you plan to adopt private 5G” nearly always yields little in the way of accurate results. I think that’s the case here. The lead graphic says, for example, that the top industrial connectivity options are WiFi, fiber, Ethernet, private LTE, and SD-WAN, with each of the first four having over 60% adoption. That’s in no way consistent with my own findings, unless you broaden “industrial connectivity” to mean anything an industrial company uses, which would of course have little relevance to private 5G opportunity.

An assertion from the same chart is even more striking: over three-quarters of companies expect to adopt private 5G by 2024. Again, this is totally at odds with what I’ve heard from companies actually looking at industrial connectivity; over three-quarters of them said they had no idea when, or whether, they’d adopt private 5G.

What’s really going on with “connectivity in manufacturing” has to be broken down to be assessed. Almost all enterprises have the same basic issues in their enterprise connectivity strategies. Manufacturing differences arise largely from the manufacturing process itself, meaning industrial IoT. That topic divides twice—once by whether the elements are fixed or mobile, and again by whether the application is green- or brown-field. Each of these divisions impacts 5G value, whether public, private, or hybrid.

My contacts (limited in number by how few actual private 5G adoptions or serious evaluations there are) tell me that the biggest driver of private or hybrid 5G consideration is the need to support locally mobile IoT elements. If the application is also greenfield, then private 5G is very likely worth looking at. If some IoT elements could be mobile beyond the scope of a facility, then hybrid 5G is a viable option. Users offered warehousing and transportation as an example of an application space where this could happen.

Where IoT elements are fixed, the preference I get from my contacts remains for some form of wiring. True manufacturing/industrial IoT often bundles IoT elements and controllers with the tools involved in creating the product(s), and in most cases they say that the connections are made using wiring. Greenfield versions of these applications seem to follow the same wiring preference, but where something has to be added to a manufacturing/industrial process, wireless may be easier to integrate. There, 5G might be an option.

When does consideration of 5G turn into adoption? There’s some congruence between what the article suggests and what I hear, but not complete congruence. Security and latency are what I hear could be differentiators for private or hybrid 5G, but that’s true only where the alternative is either 4G/LTE or WiFi, since wired IoT is obviously the lowest latency of all. What’s interesting is that a slight majority of enterprises tell me that their applications wouldn’t demand 5G-level latency control, and a significant number didn’t even know what the latency difference would be.

WiFi 6 is another interesting factor. Only about a third of prospective 5G users distinguish between WiFi 6 and earlier versions, and only about a fifth know anything about the latency control available in WiFi 6. Even fewer know anything about WiFi 6 security. Given those points, it’s difficult to justify the assumption that three-quarters of enterprises with manufacturing connectivity needs would be adopting 5G; wouldn’t any serious movement in that direction compare obviously cheaper and easier-to-adopt alternatives?

This is where things like the Ananki/Rakuten announcements come in. Enterprises really want private 5G as a service or in a package. They’d love to get it from a vendor they’re familiar with, but an open approach is nearly as good according to what I’m hearing. Having easy access to a consumable private 5G offering could lower the barrier to adoption significantly, though I still believe that users will have to weigh the value proposition of private 5G for their applications before they move. Thus, I don’t think that even these recent private-5G offerings will create a private 5G avalanche.

Then why would users respond to a survey with what seem to be obviously incorrect or poorly thought-out answers? I’ve had many opportunities to assess survey responses, and it’s my experience that roughly a third of people surveyed will tell the agent what they believe the agent wants to hear, or give the answer that makes them look smart and up-to-date. I’ve cited in past blogs the specific example of a third of enterprises telling a survey firm they used 100 Mbps Ethernet when there was no commercial product available at the time. Survey firms also tend to go back to people they can get answers from, sometimes stretching the knowledge of those people if the new survey topic is out of their area.

The net is this. I see no indication of anywhere near the private or hybrid 5G interest that the article and survey suggest. OK, maybe my data is wrong, but I’ve gotten consistently reliable information from my contacts over years, often decades. I also see a lot of worrying inconsistencies in the data, places where it contradicts or conflicts with other information whose reliability is almost unquestionable, like sales.

There is a value to 5G, and to private 5G. That value, though, lies more in what applications it might enable than in what it’s going to do on its own. The catch is that anything whose value depends on enabling other things is hostage to the pace at which those other things actually evolve, and many of those enabling applications are themselves citing the success of 5G to pull them through. If A depends on B, which in turn depends on A, then we don’t have symbiosis, we have a circular argument.

That’s why we need to be realistic about what’s going to happen with 5G; the answer will be “not much” unless we take on the task of creating things that actually justify it.

Why Comcast’s Business Service Goals May Target the Wrong Businesses

Many broadband providers have traditionally offered both business and consumer services, but cable companies have often been strongly biased toward consumers because of their cable TV roots. Now, cable giant Comcast, having acquired Masergy, is looking for a stronger business position, even for enterprises. It’s a logical move, given that their telco competitors are all selling in the SMB and enterprise space, but cable companies like Comcast have some work to do, and targeting “enterprises” may not be the way to do it.

In the consumer space, cable companies have generally had a lower level of customer satisfaction, and that’s true of Comcast, but Comcast has recently been named the most improved in that metric, and in some surveys they top all broadband providers. Issues with installation, with customer support, and with broadband performance and availability have all been reduced, according to the surveys. One might think that this kind of improvement, which has been truly significant measured over the last two years, would have changed cable’s, and Comcast’s, market numbers. Not so much. Comcast rode the broadband tide created by COVID lockdowns, and that tide is now receding. The company expects a slowdown in growth now, as work-from-home declines. No wonder business broadband is looking more interesting.

Comcast and other cable companies have long offered “business broadband” from essentially the same infrastructure that supports the consumer, but with enhanced customer care offerings. Most of my SMB contacts who use cable broadband are using it for business Internet access, email and texting, and other services that mirror what consumers use broadband for. The percentage who actually use a service like VPNs or SD-WAN is very small. My surveys don’t have a large SMB representation, so statistical significance is hard to come by, but while they show almost 100% growth in VPN usage by cable customers, the 2021 numbers are still in the low single digits.

The reason this is important is that the ARPU for business broadband depends on how much it can be differentiated from consumer broadband. Businesses will (and do) pay more than consumers for “more reliable” and “more secure” services, but not all that much more. Adding on other features, such as VPN/SD-WAN capability, is a great way to boost ARPU, but these features aren’t valuable or even useful to all businesses. Since differentiable features are also critical in sustaining pricing power in a competitive market, you can see how important it is that business broadband be more than re-branded consumer broadband.

Data from my surveys (with the same significance proviso) says that the reason given most often for selecting cable broadband was that no other provider was available. This suggests that the growth in business customers for cable companies may be due more to growth in the number of businesses that have come to depend on broadband. Cable companies have to get beyond that, and that means targeting businesses that would value something more than basic connectivity. The demographics of these businesses offer a hint of the cable company business service opportunity, because they can help us derive just how many businesses would be prospects.

Of the roughly 8 million business sites in the US, half have fewer than five employees. These have an average of 1.001 sites per business, meaning that they’re nearly all single-site businesses. While most businesses with over 500 employees have multiple sites, there are only roughly 20 thousand of them, representing an average of 66 sites per business. Drop down to businesses with between 100 and 499 employees and the average business has roughly 4 sites, and overall it appears that three-quarters of all businesses are single-site.
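A quick back-of-envelope pass through those figures shows just how top-heavy the site count is; everything below simply restates the approximations cited in this and the following paragraph, not fresh data.

```python
# Back-of-envelope restatement of the figures cited here; these are the
# approximations from the text, not precise census data.

TOTAL_SITES = 8_000_000

large_firms, sites_per_large_firm = 20_000, 66      # over 500 employees
multi_site_firms, multi_site_share = 120_000, 0.02  # ~2% of all businesses

large_firm_sites = large_firms * sites_per_large_firm
total_businesses = multi_site_firms / multi_site_share

print(f"Implied total businesses: ~{total_businesses:,.0f}")
print(f"Sites held by 500+ employee firms: ~{large_firm_sites:,.0f} "
      f"({large_firm_sites / TOTAL_SITES:.0%} of all sites)")
```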

These single-site businesses tend to be prospects for consumer-like business broadband, of course; you’re a competitor if you pass the prospect with your infrastructure. Differentiable business broadband, meaning broadband services with non-consumer features, would likely have to be targeted at the roughly 120 thousand firms that actually have multiple sites, and that represents only about 2% of the business population. My surveys indicate that multi-site businesses tend to have all their sites in areas where there are several broadband providers available, which makes differentiable business services important for competitive reasons, too.

It’s my view, also based on my surveys, that the single characteristic that identifies a true “business broadband” prospect is that they have multiple sites and need connectivity among them. That would suggest that the Masergy deal Comcast made could have much broader impact than the “enterprise” market that the article cited above says the deal targets.

Just what defines an “enterprise” is slippery, so let’s look at the raw data again for guidance. The top group in US statistics, companies with 20 thousand or more employees, contains only 537 companies, with an average of about 1,100 sites per company. Many use 10 thousand employees as the boundary of an “enterprise”; this group averages just short of 700 sites per company, but it numbers only about 1,100 companies. Those are obviously high-value prospects, but they’re solidly in the telcos’ pocket, and tossing the telcos out could be problematic. In addition, the goal of 100G service to an “enterprise” means a fiber network rather than CATV infrastructure, and telcos can build out fiber more easily because they have a lower ROI target than cable companies.

Comcast and cable companies need to think smaller, but not small. The sweet spot, according to my modeling of the data, would be companies between 1,000 and 5,000 employees, a group with an average of about 38 sites per company. My surveys have consistently suggested that the network features and issues users in that group found significant were pretty congruent with those of the top-tier (10,000 employees or more) enterprise companies. However, network literacy, and the extent to which companies in the mid-sized group could acquire and retain network specialists, were much lower than for enterprises. That would make this group more likely to respond to a managed services offering built around a VPN and including other Masergy services. This group is also less likely to be wedded to a telco broadband offering.

I suggested in my blog about the Comcast/Masergy deal that Comcast may hope that being able to serve enterprise sites too small for MPLS VPNs could get their foot in the enterprise door. It could, but the enterprises could actually deploy SD-WAN rather than a managed SD-WAN service, and even if Comcast won the SD-WAN business, they’d still have to displace the MPLS VPNs, which is more than just fiber access. Mid-sized businesses with between 1,000 and 5,000 employees could be a better place to target, offering more sales and a higher return. An average of 38 sites per business could generate respectable business revenue, and managed service add-ons could improve ARPU.

Since most of the data I’ve cited here is public, it’s hard to believe Comcast doesn’t see this, and it may be that they are already planning a push like the one I’ve described. If they aren’t, they need to think seriously about taking a shot, and quickly.

Cloudflare, Cloud Market Barriers, and the Edge

According to the company, Cloudflare wants to be the “fourth major public cloud”, but at the same time EU cloud providers are losing market share steadily to the current three US giants. OK, bold aspirations make good ink, but there do seem to be some significant barriers rising in the public cloud market. Is that bad, and if so what can we do about it? Could the same forces that limit the entry of new cloud providers also impact edge computing?

The public cloud giants have been getting steadily bigger, which is usually attributed to their superior economy of scale. It’s more complicated than that. The efficiency of a cloud doesn’t continue to grow as the size of the cloud grows; it’s an Erlang curve that plateaus at a point that a new cloud provider could reach, at least in one data center. It is true that the three largest public clouds can deploy in multiple geographies with near-optimum resource efficiency, but that’s not the biggest barrier.
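The plateau is easy to see with a standard Erlang-B calculation. The sketch below uses an assumed 1% blocking target as a stand-in for whatever capacity headroom a cloud operator actually engineers for; the point is only that achievable utilization flattens out well before hyperscale size.

```python
# Erlang-B blocking, via the standard recurrence, used to show that pooling
# efficiency plateaus as the resource pool grows. The 1% blocking target is an
# assumption standing in for a cloud operator's real capacity-headroom policy.

def erlang_b(servers: int, offered_load: float) -> float:
    b = 1.0
    for m in range(1, servers + 1):
        b = (offered_load * b) / (m + offered_load * b)
    return b

def max_utilization(servers: int, target_blocking: float = 0.01) -> float:
    """Highest utilization achievable while keeping blocking under the target."""
    lo, hi = 0.0, 2.0 * servers
    for _ in range(60):                      # bisection on offered load
        mid = (lo + hi) / 2
        if erlang_b(servers, mid) <= target_blocking:
            lo = mid
        else:
            hi = mid
    return lo * (1 - erlang_b(servers, lo)) / servers

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} servers: ~{max_utilization(n):.0%} utilization at 1% blocking")
```

The jump from 1,000 to 10,000 servers buys only a few percentage points of utilization, which is the plateau I’m talking about.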

Over the decade since public cloud services were first offered, our notion of the cloud has changed radically. At first, people thought the cloud was essentially a virtual-machine-as-a-service, a place where either everything was eventually going to live (wrong) or at least where server consolidation would come to roost. Over time, it’s become clear that the cloud is a partner to the data center, a place where, in particular, application front-end elements relating to the user experience could be hosted. That mission gradually drove the expansion of “web services” and the creation of a “platform-as-a-service”.

Today’s cloud has dozens of hosted features, available to developers via APIs, and multiple hosting options beyond IaaS. Applications for public clouds could be built without these features, but it would be more difficult and require more expertise on the part of developers. This rich set of services would create a major barrier to market entry for other cloud aspirants, and interestingly enough, Cloudflare isn’t (at least now) actually proposing to build most of those features. Instead, it seems to be concentrating on one—cloud data storage.

Databases in the cloud are expensive on two levels. First, the storage itself is (according to many enterprises I’ve chatted with) expensive enough to rule it out for some applications. Second, companies like Amazon (with S3) charge for data entry and egress, and that makes it particularly difficult in hybrid cloud applications where both cloud and data center elements have to access the same data. What Cloudflare proposes to do is to make storage cheaper and eliminate access charges where data has to cross cloud boundaries.

That may be the sleeper issue here. The public cloud giants charge for “border crossing” traffic overall, and Cloudflare created the Bandwidth Alliance to provide for mitigation of these charges by having cloud providers link their networks to reduce the costs. To quote Cloudflare, “Our partners have agreed to pass on these cost savings to our joint customers by waiving or reducing data transfer charges.” Smaller cloud providers are limited in their ability to match the geographic and feature scope of the giants, and that means that they have to focus on a best-of-breed model. However, those border-crossing charges make cross-cloud-boundary transfers more expensive. It’s interesting to note that the number one public cloud provider, Amazon, is the specific target of the savings calculator Cloudflare offers.
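To make the border-crossing point concrete, here’s a trivial sketch; the per-GB rate and the monthly volume are assumed placeholders, not a quote of any provider’s actual price list.

```python
# Toy illustration of the cross-cloud data transfer charge on a hybrid or
# multi-cloud workload. The rate and volume are assumptions for the example.

MONTHLY_GB_CROSSING = 50_000     # data moved across the cloud boundary per month
EGRESS_RATE_PER_GB = 0.09        # assumed list-price border-crossing charge

list_price_bill = MONTHLY_GB_CROSSING * EGRESS_RATE_PER_GB
print(f"Border-crossing charges at list rate: ${list_price_bill:,.0f}/month")
print("With a Bandwidth-Alliance-style waiver:  $0/month")
```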

Multi-cloud promotion would open the door to “niche” cloud providers that focus on a specific feature set, either competing with the giants or offering something the giants have been unwilling or unable to offer. If we had a means of reducing access charges significantly, we could see an explosion in multi-cloud, and that would almost surely drive an explosion in the number of public cloud providers. It would also benefit players like Cloudflare, who offer services that would increasingly commit users to a multi-cloud deployment. Cloudflare’s R2 cloud storage offering is an example, though the service isn’t yet available. That would mean a transformation of how multi-cloud is used.

Today, enterprises use multi-cloud primarily for backup, or to parallel multiple providers to optimize geographic coverage where one provider can’t offer it where it’s needed. Best-of-breed multi-cloud would mean that almost every enterprise could use multi-cloud for almost any application, and that could vastly accelerate cloud usage. The downside, of course, is that greater usage would divide revenues across a larger field of competitors, and market leaders like Amazon today, and perhaps Google and Microsoft tomorrow, might see the near-term downside risk as larger than the long-term total-addressable-market (TAM) gain. Today’s money is always more appealing.

Another interesting question that this raises is whether the whole notion of the Bandwidth Alliance wouldn’t convert the cloud from today’s separate players to something more like the Internet, where the players share an underlying network framework. That in turn might blur the boundary between cloud and Internet.

Edge computing could be a factor here, for a number of reasons. The obvious one is that by definition, “the edge” is much more geographically diverse than “the cloud”. In the US, for example, my model says that you could achieve a near-optimum cloud with only 3 major data centers (neglecting availability issues) and that more than a dozen would likely be inefficient. In contrast, you’d need a minimum of 780 edge data centers to field a credible edge offering nationally, and you could justify over 16,000. Globally, edge computing could justify 100,000 data centers, and surely no single player could hope to deploy that much.
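One way to sanity-check those counts is to translate them into an implied per-site service radius, assuming sites are spread evenly across the contiguous US; the land-area figure is the only outside number in the sketch below, and the site counts are the ones quoted above.

```python
import math

# Translate edge data center counts into an implied service radius, assuming
# an even spread over the contiguous US. A geometric sanity check only, not
# output from my model.

US_AREA_SQ_KM = 8_000_000   # rough contiguous-US land area

for sites in (780, 16_000):
    area_per_site = US_AREA_SQ_KM / sites
    radius_km = math.sqrt(area_per_site / math.pi)
    print(f"{sites:>6} sites -> ~{radius_km:,.0f} km service radius each")
```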

“Competitive overbuild” doesn’t work at the edge; the cost is too high. We need a “cooperative edge build” instead, and that’s not going to work unless you have “Internet-like” peering among edge (and cloud) providers. Or, of course, public cloud providers could partner with somebody who has edge real estate and create “federations”. That’s what seems to be happening with the cloud-provider-and-telco partnerships on the edge today.

The Internet is a prime example of the value of community over the benefit of exclusivity. The cloud, not so much. The edge could be more Internet-like in terms of value, and that might drive changes in the way that cloud providers and edge providers do peering and charge for data border crossings. If it does, then it will likely benefit us all…except perhaps those with a chance to win in an exclusivity game.

How could a populist edge, promoting a populist cloud, change things? The edge would have to be the driver, because of that clear need for cooperation, but we’re still struggling with how to make the edge, meaning edge computing, a reality. We lack the model to make it work, the “worldwide web” application framework that made the Internet what it is. The question for Cloudflare is whether they understand that, and are prepared to stake a claim in creating that essential edge model. If they are, then they do have a shot at being the fourth public cloud giant, and they even have a shot at moving up in the rankings.

A New ONF Spin-Out Could Transform Private 5G

I have to admit I was surprised when I read that the ONF was launching a private venture, Ananki, to bring its Aether 5G open-model network implementation to market. As you can see in the release, Ananki will target the “private 5G space” and will use the ONF’s Aether, SD-RAN, SD-Fabric and SD-Core technologies. The new company will be backed by venture funding, and it will apparently target the “machine to application” or IoT market for private 5G specifically.

I don’t know of another industry body like the ONF that has spun off a company to sell something based on its implementation. It would seem that a move like that could compromise the group’s vendor members and alter its mission, though the ONF says it won’t be doing anything different as a result of the spin-out. It certainly raises some interesting market questions in any case.

I’ve always been skeptical about private 5G, even for IoT applications. It’s not a matter of private-IoT 5G not working, as much as it not being worth the effort when other technologies are suitable and easier to adopt and use. WiFi is obviously the best example, and the ONF says its goal is “making private 5G as easy to consume as Wi-Fi for enterprises.” Ananki proposes to do that by deploying its private 5G on public cloud infrastructure, creating a kind of 5G-as-a-service model, and I think this could be a breakthrough in at least removing barriers to private 5G adoption. There is, of course, still the matter of justification.

Details are still a bit sparse, but the Ananki model is 5G plug-and-play. You get private white-box radios from either Ananki or one of their certified suppliers and SIM cards for your devices, and Ananki spins up a private 5G framework, delivered as SaaS, on the cloud of your choice. You manage everything through a portal and pay based on usage. All this is admittedly pretty easy.

Easy is good here, because enterprises are almost universally uncomfortable with their ability to install and sustain a private 5G network. In fact, fewer than 10% of enterprises say they would know how to build one. Other research I’ve done suggests that very few enterprises (again less than 10%, which is roughly the statistical floor for my data) will even explore whether something they don’t know how to do could make a business case, and that reduces the prospect base for private 5G.

Which isn’t the same as saying that making 5G easy makes the business case, of course. The Ananki model is going to create a cost that WiFi or other traditional IoT network technologies might not create. That complicates the business case. The flip side is that private 5G might support some IoT applications more easily than one of the network alternatives. In the net, I think it’s safe to say that there is an opportunity base for the Ananki offering, and I also think it’s safe to say that it would be larger than the opportunity base for other private 5G models, such as those from the mobile network incumbents. How big, and how much bigger, respectively, I cannot say with the data I have.

Before we write this discussion off as an intellectual exercise, consider what’s actually on offer: a 5GaaS offering based on open-source software, fielded by a company that’s established as a “Public Benefit Corporation”. This isn’t a common designation, but in short it’s a for-profit company whose board is free to consider the stated public benefit cited in its charter as the basis for decisions, and not just shareholder value. Were the technology foundation of Ananki proprietary, their cost base would be higher and their customer offering more expensive. Were Ananki a traditional corporation, it would have to evolve to maximize shareholder value, and the board might be challenged by 5G- or open-network-promoting decisions that didn’t benefit shareholders first.

Open-source is a cheaper framework for something like this. Open-source ONF Aether technology is what Ananki packages. Could that technology be packaged by others? Sure, or it wouldn’t be open source. Could other open-source network technology be packaged by someone to do the same thing? Sure. Could an open body create a cookbook so enterprises could package the necessary technology on their own? Sure. In other words, this approach could be extended and made competitive, creating market buzz, alternatives in approach, other features, and so forth.

The model might also be extended in another direction. The difference between public and private 5G comes down to spectrum and licensing. Aether is the basis for a DARPA (Defense Advanced Research Projects Agency) project, Project Pronto, to create a platform for secure, reliable research communications. DARPA’s origin was ARPA (without the “Defense”), and ARPANET is seen by most as the precursor to the Internet, so might Project Pronto launch something bigger and broader, and even become a model for service providers? Sure.

The ONF is, IMHO, an underappreciated player in the 5G space, as I noted in an earlier blog. Their Aether model takes open-model 5G beyond O-RAN and frames a very complete open infrastructure model for 5G. It’s possible that Ananki will drive that model to private 5G commercial reality, but even if it doesn’t, Ananki will validate the notion of open-model 5G from edge to core, and that will surely influence how service providers view 5G infrastructure. That might make it the most significant contribution to open-model 5G since O-RAN.

Mobile operators, and operators in general, are increasingly antsy about proprietary lock-in, as a recent story on Vodafone shows. Nokia’s decision to embrace O-RAN shows that even mobile network vendor incumbents recognize that there’s growing demand for a more open approach to network infrastructure. It could be that the Ananki model the ONF has devised will provide a more effective pathway to that.

Certainly it will be a test case, and one test that’s going to be interesting is the test of the way network equipment vendors respond. All network-operator-oriented standards-like groups tend to be dominated by equipment vendors because there are more vendors than network operators, and because network vendors have strong financial incentives to throw resources into these initiatives. It’s not only about contributing and learning, but also about controlling and obstructing. I’ve been involved in many of these initiatives, and I’ve never seen a single case where at least one vendor didn’t throw monkey wrenches into some works.

The big question with ONF/Ananki is whether the spin-out model that obviously worked for the ONF would now work for other standards bodies. The first time something like this is done, it could sneak through because opposition by vendors hasn’t really developed. If Ananki shows signs of failing, then vendors can paint the failure as a failure of open-model advocates to field anything that can actually be deployed. If it shows signs of succeeding, then will vendors try to ensure other bodies don’t follow the ONF’s approach?

Open model networking has a challenge that open-source doesn’t share. You can start something like Linux or even Kubernetes with a single system or cluster, respectively. Networks are communities, and so the first real implementation of a new strategy has to be a community implementation. Ananki is a path toward that, and while it may not be the only way to get to open-model networking in the future, it may be the only way that’s currently being presented for mobile infrastructure. In short, Ananki could revolutionize not just private 5G but open-model 5G overall.

Will the CNF Concept Fix NFV?

Recently, I blogged that the transformation of NFV from VNFs (VM-centric) to CNFs (“cloud-native” or “containerized” network functions, depending on your level of cynicism) was unlikely to be successful. One long-time LinkedIn contact and fellow member of standards groups said “If CNF never makes it… then the whole story is doomed and it’s a bad sign for Telcos’ future.” So…will CNF make it, and if it doesn’t then is the whole NFV story doomed, and if that is true, is it a bad sign for Telcos’ future? Let’s see.

Red Hat is one of the many cloud/software players who embraced NFV as part of its telco software story. They have a web page on the VNF/CNF evolution, and I want to use that as an example in the rest of this discussion. Not only is Red Hat a premier player, their stuff represents a “commercial” view of the situation rather than the view of standards-writers, which are often a bit obscure.

The Red Hat material starts with a true statement that, nevertheless, needs some amplification. “Virtual network functions (VNFs) are software applications that deliver network functions such as directory services, routers, firewalls, load balancers, and more.” That’s true, but it’s a specific example of the unstated general case that VNFs are a hosted/virtual form of a physical network function, or PNF, meaning some sort of device. The original NFV model was all about replacing devices, and routers were actually not on the original list.

The PNF origins meant that “In the initial transition from physical elements to VNFs, vendors often simply lifted embedded software systems entirely from appliances and created one large VM.” There was some debate on whether “decomposition” of existing PNFs into components should be required, but that was contrary to the base PNF-centric mission and had (obviously) little vendor support. Thus, VNFs were virtual devices, monoliths.

It took time, more than it should have frankly, but the NFV community eventually realized they needed something different. “Moving beyond virtualization to a fully cloud-native design helps push to a new level the efficiency and agility needed to rapidly deploy innovative, differentiated offers that markets and customers demand.” Since the cloud was shifting toward containers rather than VMs, “different” morphed into “containerized”. “CNF” could be said to stand for “containerized network function”, and to some it did mean that, but as the cloud became the specific target, CNF turned into “Cloud-native Network Function”.

Containers, of course, are not automatically cloud-native, and in fact my survey of enterprises suggests that most containerized applications aren’t cloud-native at all; they are not made up of microservices and are not fully scalable and resilient. Containers are a step forward for VNFs, but we might be better off thinking that the real goal is a “CNNF”, a cloud-native network function in the strict sense. The CNNF concept would admit a service built from functions/lambdas, hosted serverlessly rather than in containers, and would also keep the focus on harmony with the cloud.

The final thing I want to pull from Red Hat is this interesting point. Referencing the need for an open, consistent foundation for telcos, they say: “Building that foundation on NFV (with VNFs) and especially cloud-native architectures (with CNFs) results in improved flexibility and agility.” This defines what I think is the critical shift in thinking. NFV means VNFs, and cloud-native or CNFs means not NFV, but cloud architectures. Red Hat is preparing a graceful transition out of NFV and into the cloud, retaining the notion of network functions but not the baggage of NFV.

Or maybe not, or maybe only some. If we assume containerized cloud-native elements, then we can assume that services built with CNFs would have all the container elements needed to deploy on an arbitrary cluster of resources (the “telco cloud”); they carry their instructions with them. A service could be visualized either as a set of functions that created a virtual device (what NFV would have treated as a monolith), or as a set of functions, period. That would seem to substitute cloud resource management and orchestration for NFV’s MANO, a cluster or clusters for NFVI, and CNFs for VNFs. One thing left is the notion of VNFM.

The goal of VNFM was/is to present function management in the same way that device management was presented when a VNF was created from a PNF. We can’t expect the cloud to manage network functions with cloud-specific tools; the CNFs are “applications” to the cloud, and their management would be more specialized. There’s also the question of the extent to which function management has to be aware of function hosting, meaning the underlying resources on which the CNFs were deployed. NFV never really had a satisfactory approach to that, just a name and a loose concept of PNF/VNF management equivalence.

CNFs could, then, fall prey to this issue. Before NFV ever came about, I’d proposed that hosted network features had to have a management interface that was composed rather than expressed, using what I’d called “derived operations”. This was based on the IETF draft (which, sadly, didn’t go anywhere) called “Infrastructure to Application Exposure” or (in the whimsical world of the Internet) i2aex. You used management daemons to poll everything “real” for status and stored the result in a database. When a management interface was required, you did a query, formatted the result according to your needs, and that was your API.
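To make “derived operations” concrete, here’s a minimal sketch of the pattern, with hypothetical resource names and metrics; the point is the split between a polling daemon that is the only thing touching the real resources, and management views that are composed by query.

```python
import sqlite3
import time

# Minimal sketch of the "derived operations" pattern described above: a
# polling daemon is the only thing that touches real resources; management
# interfaces are composed from queries against the collected status database.
# Resource names and status fields are hypothetical.

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE status (resource TEXT, metric TEXT, value REAL, ts REAL)")

def poll_resources():
    """Stand-in for the management daemon that polls the 'real' elements."""
    samples = [("host-1", "cpu_load", 0.42), ("host-1", "if_errors", 0.0),
               ("cnf-fw-3", "sessions", 1200.0)]
    now = time.time()
    db.executemany("INSERT INTO status VALUES (?, ?, ?, ?)",
                   [(r, m, v, now) for r, m, v in samples])

def derived_view(resources: list[str]) -> dict:
    """Compose a per-service management 'API' as a query, not a device poll."""
    marks = ",".join("?" for _ in resources)
    rows = db.execute(
        f"SELECT resource, metric, value FROM status WHERE resource IN ({marks})",
        resources).fetchall()
    return {f"{r}.{m}": v for r, m, v in rows}

poll_resources()
print(derived_view(["host-1", "cnf-fw-3"]))   # the service sees only this view
```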

The advantage of this approach is that it lets you grab status information from shared resources without exposing those resources to services, which could at best overload resources with status polls, and at worst lead to a security breach. We don’t have that in CNFs as the NFV ISG sees them, and Red Hat doesn’t appear to assume the approach either.

VNFM seems to be the only piece of NFV that’s 1) needed for cloud-native function virtualization, and 2) not specifically adapted to the cloud by the CNF initiative. Since I would argue that the NFV ISG version of VNFM wasn’t adequate for multi-tenant services in the first place, adapting that wouldn’t be worth the effort. Since the cloud really isn’t addressing the specific issues that VNFM did (inadequately, as I’ve said), we can’t expect the cloud to come up with its own strategy.

When I advocate that we forget NFV, write off the effort, I’m not suggesting that we don’t need something to support hosted virtual functions in networking, only that NFV isn’t and wasn’t it. I’d also suggest that if the NFV ISG and NFV supporters in the vendor community think that CNFs are necessary, then they should accept the fact that just having CNFs doesn’t make NFV a cloud technology. We need the equivalent of VNFM, and I think the i2aex model I mentioned is at least a credible pathway to getting that. It may not be the only one, but it’s at least an exemplar that we could use to undertake a broader search.

Where does this lead us in answering the question I raised at the start of this blog? Vendors are answering it by blowing kisses at NFV while quietly trying to do the right thing, IMHO. That’s better than doing the wrong thing, but it means that the NFV ISG initiatives aren’t working, and won’t work, and the reason is that standards-related bodies worldwide have always been reluctant (well, let’s be frank, unwilling) to admit their past efforts were wasted. We’ve spent a lot of time trying to make cosmetic changes to NFV rather than substantive ones, all the while ignoring the truth that the cloud has passed it by in almost every area, and leaving open the one area where the cloud probably won’t help. CNFs won’t fix NFV, and if that means NFV is in trouble, then we’d better start dealing with that.

Nokia, 5G Disruption, and 5G Realization

Some on Wall Street think Nokia is a disruptor in disguise, reinventing itself quietly to seize control of networking through 5G. I don’t agree with the article’s line that “5G is the Next Industrial Revolution” (hey, this is media, so do we expect hype or what?) but I do think that the article makes some interesting points.

Network transformation requires money to fund the transforming, period. That’s the irreducible requirement, more so than any specific technology shift or service target. In fact, those things are relevant only to the extent that they contribute to the “money”. What makes 5G important isn’t that it’s revolutionary in itself, but that it’s a funded step on the way to a different network vision. The revolution isn’t 5G, but what 5G could do to change how we build network services. Emphasis on the “could”.

I’ve blogged about Nokia and 5G before (HERE and HERE), particularly with regard to its fairly aggressive O-RAN commitment. I believe that O-RAN is the key to getting a new open-model architecture for network infrastructure into play, and that it’s also the key to starting a 5G-driven transformation of network-building. But if 5G is only a stepping-stone, then Nokia needs to support the pedestal it leads to and not just the step. Do they know that, and can they do it?

If we had ubiquitous 5G, would it get used in IoT and other applications? Surely, provided those applications could create a service model that delivered that irreducible “money” element. 5G does not remove all the key barriers to any of these applications, and for most it doesn’t remove any barriers at all. Yes, enormous-scale public-sensor IoT (for example) could demand a different mobile technology to support it, but we don’t have that and we don’t have clear signs that we’re even headed there. That’s the challenge Nokia faces if it’s to exploit whatever the “Next Industrial Revolution” really is.

If that next industrial revolution isn’t driven by 5G, it’s largely because it’s not driven by connectivity alone. Applications are what create value, and delivering that value is the network’s mission. Does Nokia realize that, and have they taken steps to be an application-value player? Indications are there, but not prominent.

Nokia’s website has two “solutions” categories that could validate their effort in being a player in the creation of network-transforming applications, both under the main category of IoT. The first is IoT Analytics and the second IoT Platforms. Unfortunately, IoT Platforms is all about device and connection management and not about hosting IoT applications. IoT Analytics does have useful capabilities in event correlation, analysis, and business responses. Since the article I referenced at the start of this blog is really largely about IoT, you could take Nokia’s IoT analytics as a step toward realizing the “disruptor” claim.

The problem is that every public cloud provider offers the same sort of toolkit for IoT analytics, and there’s a substantial inventory of open-source and proprietary software that does the same thing. If you explore Nokia’s IoT strategy, it seems to me that it’s aimed less at the enterprises and more at service providers who want to serve those enterprises. Those service providers would still need to frame a service offering that included Nokia’s IoT elements, but couldn’t likely be limited to them because of competition from public cloud and open source. They’d also have to overcome their obvious reluctance to step beyond connection services, and that might be the tallest order of all.

There’s also a bit of a Catch-22 with 5G and IoT and other edge applications. The applications themselves would surely roll out faster if they weren’t 5G-specific, since connectivity can be provided by LTE or even WiFi in many cases. The problem for Nokia is that a decision to accelerate the applications by making them dependent on connectivity in general, rather than on 5G specifically, would mean they wouldn’t pull through Nokia’s 5G story. That could put a lot of Nokia’s potential disruptor status at risk. It’s going to be interesting to see how Nokia balances this over the rest of 2021 and into next year.

Is Verizon’s MEC “Land Grab” Really Grabbing any Land?

Verizon thinks it’s out front in what it calls an enterprise “land grab” at the edge. Of course, everyone likes to say they’re in the lead of some race or another, and Verizon’s position in the edge is really set by deals with Microsoft and Amazon. Does this mean that they’re just resellers and not in the lead at all, or that maybe there are factors in establishing leadership that we’ve not yet considered?

One thing that jumps out in the article is the references to Mobile (or Multi-Access) Edge Computing or MEC, versus the more general concept of edge computing. The article blurs the distinction a bit, quoting an investor transcript where Verizon’s CEO says “We are the pioneer and the only one in the world so far where we launch mobile edge compute, where we bring processing and compute to the edge of the network for new use cases.” That implies a more general edge mission. However, the same transcript quotes the Verizon CEO saying “First of all is the 5G adoption, which is everything from the mobility case, consumers and business and then fixed wireless access on 5G.” This seems to focus everything on 5G and even private 5G.

There’s some hope that other parts of the transcript could bring some clarity to the picture. Verizon also said that “We have Amazon and Microsoft being part of that offering. They are a little bit different. One is for the public mobile edge compute and one is for the private mobile edge compute.” That’s hardly an easy statement to decode, but let’s give it a go, based on Verizon’s website and previous press releases.

“Public” and “private” MEC here refer both to whether public or private 5G is used and to whether the cloud-provider-supplied MEC hosting is linked with Verizon’s actual service edge, or whether it’s hosted on the customer premises. The Amazon Wavelength offering is integrated at the Verizon edge (the public mobile edge), and the Microsoft Azure relationship uses Verizon’s 5G Edge to support a private RAN (LTE or 5G) and host Azure components on the customer’s premises (the private mobile edge).

In both cases, the goal of MEC is to introduce a processing point to latency-sensitive applications that sits close to the point of event origination, rather than deep inside the Internet/cloud. Where there’s a concentration of IoT devices in a single facility, having MEC hosted there makes a lot of sense. Where the IoT elements are distributed or where the user doesn’t have the literacy/staff to maintain MEC on premises, a cloud option might be better.

Supporting both, and with different cloud partnerships, seems aimed more at creating cloud provider relationships than at actually driving user adoption. Most users would likely prefer a single model, and of course either of Verizon’s MEC options could (with some assistance, likely) be made to work either as a cloud service or as a premises extension of the cloud. That’s not really made clear, which seems to cede the responsibility for creating real users, and real applications, to somebody else.

One interesting point the article and transcript make is that Verizon is saying it doesn’t expect to see meaningful revenue from edge services until next year. Add to that the fact that Verizon’s CEO says they’ve “created” the market and you have to wonder whether there’s a lot of wishful thinking going on here. The biggest wish, obviously, is that somebody actually builds an application that can drive the process, make the business case for MEC and itself. Who would that be? There are three broad options.

Option one is that the enterprises build their own applications and host them on Verizon’s MEC solution. Verizon’s material suggests that this is the preferred path, but since the material is web collateral it’s possible that the preference is just a bias toward the kind of organization who’d likely be looking for MEC offerings rather than IoT applications. Verizon’s role in this is valuable to the extent that it has some account control among these prospective buyers.

Option two is that public cloud providers would build their own applications and offer them as a part of Verizon’s MEC. This option could be more populist; smaller users without their own IoT development capability could easily adopt a third-party service. However, it could be a complicated option to realize, because the cloud providers already have edge computing strategies and application tools, and Verizon is unlikely to have enough influence with smaller firms to justify its taking a piece of the action. The ability to integrate with Verizon’s network (the Amazon Wavelength variant) could demonstrate a clear Verizon benefit, though.

This is the option that Verizon seems to be pursuing, at least as far as what they’ve told Wall Street. At the Goldman Sachs Communicopia event, they indicated that they were getting traction from enterprises on private 5G and were working with the cloud providers on edge computing applications. I can’t validate either from my own contacts with enterprises, but it does seem that the public cloud deals option would be the one most likely to bear fruit in the near term.

The final choice would be that third-party developers would use the Verizon MEC service. This would empower users of all kinds if it could be made to work, but it’s difficult to see how Verizon would be able to create a good program. Their IoT developer program was focused on pure IoT connectivity, and Verizon doesn’t have any particularly credible account relationship with the software/application side of enterprise CIO organizations.

If we assume that the most certain path to business success is to own your target market, it’s hard to see how Verizon’s “land grab” grabs any useful land under any of these three options. What seems to be on the table is simply a commission on selling cloud services someone else creates. It’s not that the options aren’t viable application pathways, as much as that they’re not particularly centered on Verizon and would be difficult to realign without considerable Verizon effort. Effort, sad to say, that’s not likely to be forthcoming. If we stay with the “land grab” analogy here, what Verizon seems to have grabbed is the cornfield in the Field of Dreams.

Why Not NFV?

I’ve blogged a lot about the relationship between 5G and edge computing. In most of my blogs I’ve focused on the importance of coming up with a common software model, a kind of PaaS, that would allow 5G deployment to pull through infrastructure that would support generalized edge computing. Most of those who have chatted with me on that point feel that “the cloud” offers the path to success, but a few wonder why 5G’s NFV (Network Function Virtualization) reference doesn’t mean that NFV is the solution. Obviously, we need to look at that question.

The fundamental goal of NFV was to convert network appliances (devices) from physical network functions or PNFs to virtual network functions or VNFs. The presumption inherent in the goal is that what is hosted in NFV is the equivalent of a device. There may be chains of VNFs (“service chaining”), but these chains represent the virtual equivalent of a chain of devices. Not only that, service chains were presumably connected by “interfaces” just like real devices, and that means that the concept of a “network” or “service” had to be applied from the outside, where knowledge of the (popularly named) “gozintos” (meaning “this goes into that”) is available.

One reason for this was that the NFV ISG wanted to preserve the management/operations framework of the network through the PNF-to-VNF transition. In short, a VNF should appear to a management system that managed PNFs as just another device. The only incremental management/operations requirements that NFV should create are those associated with the aspects of a VNF that don’t apply to PNFs. You don’t “deploy” a PNF in a software sense, nor do you have to manage the hosting resources, so responsibilities like those were consigned to the Management and Orchestration (MANO) framework, made up of the NFV Orchestrator, the VNF Manager (VNFM), and the Virtualized Infrastructure Manager (VIM).
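
To make that device-centric framing concrete, here’s a minimal, purely illustrative Python sketch of the division of labor just described. None of the class names, fields, or functions below come from the ETSI specifications; they simply mirror the roles in the text.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: toy names, not ETSI-defined classes or APIs.

@dataclass
class VNF:
    """A virtual network function: the software equivalent of one device."""
    name: str
    image: str                                           # the packaged function to host
    interfaces: List[str] = field(default_factory=list)  # device-like ports

@dataclass
class ServiceChain:
    """A chain of VNFs, the virtual analog of cabling devices in sequence."""
    vnfs: List[VNF]

class VIM:
    """Virtualized Infrastructure Manager: allocates hosting resources."""
    def allocate(self, vnf: VNF) -> str:
        host = f"host-for-{vnf.name}"                    # placeholder placement decision
        print(f"VIM: placed {vnf.name} ({vnf.image}) on {host}")
        return host

class MANO:
    """Management and Orchestration: deploys the chain; the service-level
    view still has to come from outside (the OSS/BSS)."""
    def __init__(self, vim: VIM):
        self.vim = vim

    def deploy(self, chain: ServiceChain) -> None:
        for vnf in chain.vnfs:
            self.vim.allocate(vnf)                       # the hosting concerns NFV added

firewall = VNF("vFirewall", "vendor/fw:1.0", ["wan", "lan"])
router = VNF("vRouter", "vendor/rtr:2.1", ["lan", "core"])
MANO(VIM()).deploy(ServiceChain([firewall, router]))
```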

5G specifications from the 3GPP have evolved over a period of years, and they evolved the way other 3GPP work did, meaning they assume that the functional elements are devices with device-like interfaces. 5G used NFV because 5G defined what NFV was virtualizing, in short. If we could say that generalized edge applications were (like 5G) based on virtualizing devices, this model would work, at least to the same extent that NFV works overall.

Well, maybe not totally. One issue with NFV, emerging more from the evolution of the proof-of-concept trials and from vendor interests than from the formal goals, was that NFV turned out to be focused on services deployed one-off to individual customers. The most popular concept in NFV is universal CPE (uCPE), which is a generalized device host for an inventory of per-customer service features. NFV never really addressed the question of how you’d deploy shared functionality.
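
A tiny, hypothetical sketch of the pattern NFV and uCPE settled into, versus the one they left unaddressed (the customer names and the chain contents are invented):

```python
# Illustrative only: the per-customer pattern NFV and uCPE centered on.
customers = ["acme", "globex", "initech"]        # hypothetical buyers

def deploy_chain_for(customer: str) -> None:
    # One private copy of, say, a firewall/SD-WAN chain per customer,
    # hosted on that customer's own uCPE box.
    print(f"deploying a dedicated service chain for {customer}")

for customer in customers:
    deploy_chain_for(customer)

# The other pattern, a single shared, multi-tenant function instance scaled
# as a pool rather than replicated per buyer (what a 5G core or a CDN needs),
# is the one the NFV work never really defined.
```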

I’ve said many times that I do not believe that NFV created a useful model for virtual function deployment, so I won’t recap my reasons here. Instead, let me posit that if NFV were the right answer, we would first see a bunch of NFV adoptions, and we’d see NFV incorporated in 5G broadly. Neither is the case, but let me focus on the second point here.

O-RAN is the most successful virtual-function initiative in all of telecom. What’s interesting about it from the perspective of edge computing is that O-RAN’s virtualization model (O-Cloud) is explicitly not based on NFV elements. Yes, you could probably map O-Cloud to the NFV Infrastructure (NFVi) of the NFV ISG specs, but the actual connection point is described in current material using terms like “cloud stack”. That means that just as you could map O-Cloud to NFV, you could also map it to VMs, containers, Kubernetes, and so forth. It’s cloud and not PNF in its model.
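
As an illustration of what “cloud and not PNF” means in practice, here’s a sketch that treats a RAN function as an ordinary containerized workload on a generic Kubernetes cluster. It assumes the “kubernetes” Python client and a reachable cluster, and the image name and labels are stand-ins, not real O-RAN artifacts.

```python
# Illustrative sketch: a RAN function deployed like any other cloud workload.
from kubernetes import client, config

config.load_kube_config()                       # use local cluster credentials

labels = {"app": "near-rt-ric"}                 # hypothetical workload label
container = client.V1Container(
    name="near-rt-ric",
    image="example.com/near-rt-ric:0.1",        # placeholder image
    ports=[client.V1ContainerPort(container_port=8080)],
)
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="near-rt-ric"),
    spec=client.V1DeploymentSpec(
        replicas=2,                             # scale like any cloud workload
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```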

One obvious consequence of this philosophical shift is that the MANO and VNFM elements of NFV are pushed down to become part of the infrastructure. Whether it says so or not, O-RAN is really defining a PaaS, not the server farm that is the presumptive NFVi framework. The VIM function in O-RAN is part of O-Cloud, and there is no reason why “O-Cloud” should be anything other than a generalized cloud computing framework. Thus, at this level at least, O-RAN is a consumer of edge services where NFV defines device virtualization services.

From this so far, you might be inclined to argue that the differences between the cloud and NFV approaches are little more than semantics. Couldn’t you consider any feature/function as a device? Isn’t NFV already pushing to accept containerization and not just virtual machines? Well, that’s the problem with simplification; it can lead you astray. To understand what the issues are, we have to do some further digging.

NFV, strictly speaking, is about deploying virtual devices more than creating services. The service management functions required by operators are presumably coming from the outside, from the OSS/BSS systems. In the cloud world, an “application” is roughly synonymous with “service”, and an orchestrator like Kubernetes (often paired with a service mesh like Linkerd) deploys applications using generalized tools.

O-RAN, strictly speaking, deploys 5G RAN elements, so it’s a bit of a one-trick pony. Its service knowledge is embedded in the RAN Intelligent Controller (RIC) components, both the near-real-time and non-real-time pieces. The responsibility for management and orchestration of the pieces of O-RAN rests with them, and so you could argue that the RICs combine to act almost like an OSS/BSS would act in the NFV world, were we talking about a customer service (what NFV targeted, you’ll recall) and not a multi-tenant service like 5G.
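
A toy sketch may help picture the split the RICs embody: the non-real-time RIC sets policy on a slow loop, and a near-real-time xApp applies it to live telemetry on a fast one. The function names, thresholds, and message shapes below are invented for illustration; they are not O-RAN A1/E2 formats.

```python
import random

# Illustrative only: a toy rendering of the RIC split. The non-real-time RIC
# works on a >1-second loop; a near-real-time xApp works on a 10ms-1s loop.

def non_rt_ric_policy() -> dict:
    """Slow loop: express intent, e.g. a cell-load ceiling."""
    return {"cell_load_threshold": 0.8}

def near_rt_xapp(policy: dict, cell_metrics: dict) -> str:
    """Fast loop: enforce the policy against current RAN telemetry."""
    overloaded = [cell for cell, load in cell_metrics.items()
                  if load > policy["cell_load_threshold"]]
    return f"rebalance traffic away from {overloaded}" if overloaded else "no action"

metrics = {f"cell-{i}": random.random() for i in range(4)}   # stand-in telemetry
print(near_rt_xapp(non_rt_ric_policy(), metrics))
```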

In order to make NFV work for O-RAN, and for 5G overall, you’d need to add service knowledge, a model of the service. Even ONAP, which presumes to be the layer above NFV’s elements in the ETSI approach to virtualized functions, doesn’t have that (which is why I told the ONAP people I wouldn’t take further briefings until they supplied and properly integrated the model concept). That would be possible, but in the end all it would do is allow other, deeper issues with NFV to emerge.

The long and short of NFV is that it isn’t a cloud-centric approach to hosting functions, and since hosting functions of any sort is really a cloud function, that’s a crippling problem. The cloud has advanced enormously in the decade since NFV came along, and NFV has struggled to advance at all. Some of that is due to the fact that NFV efforts aren’t staffed by cloud experts, but most is due to the fact that there are simply not very many people working on NFV relative to the number working on the cloud. A whole industry has developed around cloud computing, and you can’t beat an industry with a cottage industry. That’s what NFV is, in the end.

Technically, what should NFV be doing? There is really nothing useful that could be done at this point, other than to admit that the initiative took the wrong path. Whatever progress we make in function hosting in the future, in 5G, edge computing, IoT, or anywhere else, is going to be made in the cloud.

What Can We Learn from O-RAN’s Success?

According to a Light Reading article on Open RAN, “The virtualized, modular RAN will be here sooner rather than later and vendors will be tripping over each other as they try to get on board.” I agree with that statement, and with much of the article too. That raises the question of just what the success of an open-model RAN (O-RAN, in particular) will mean to the marketplace, buyers and sellers.

There is no question that the relationship between hardware and software has changed dramatically, and the changes go back well before the dawn of Linux, where Light Reading starts its discussion. Back in the 1970s, we had a host of “minicomputer” vendors, names like Data General, DEC, CDC, Perkin-Elmer, and more. You don’t hear much about those players these days, do you? The reason is software. In the early days of computing, companies wrote their own software, but that limited computing growth. Third-party software was essential in making computing pervasive, and nobody was going to write software for a system that hardly anyone had. The result was a shift to an open-model operating system that could make software portable. At the time that system was UNIX, not Linux, but Linux carries the water for open-model hosting today.

What we’re seeing now, with things like O-RAN and even white-box networking, is the application of that same principle to the networking space. 5G is demonstrating that hosted functions can play a major role in mobile networks, and they already play a major role in content delivery. Security software, which is an overlay on basic IP networking, is demonstrating that same point. How long will it be before we see the same kind of shift in networking that we’ve already seen in computing? This is the question that Cisco’s software-centric vision of the future (which I blogged on yesterday) should be asking. Short answer: Not more than a couple years.

The O-RAN model is particularly important here, not because it’s a new thing (as I just noted, it’s just the latest driver toward openness), but because it’s a bit of a poster child for what it takes for something that’s clearly in the buyer’s best interest to overcome seller resistance.

O-RAN as a standards-setter is dominated by operators, something that vendors have always hated and resisted. Past efforts to let network operators dominate their own infrastructure model have been met with resistance in the form of (at minimum) vendor manipulation and (at worst) threats of regulatory or anti-trust intervention. While the O-RAN Alliance has recently had its share of tension, it seems to have navigated through it.

Why is this important? Well, Linux was the brainchild of Linus Torvalds, a legendary/visionary software architect who did the early work, building on the APIs that UNIX had already popularized. Other open-source projects have been projects, and increasingly projects under the umbrella of an organization like the Linux or Apache foundations. In short, we evolved a model of cooperative design and development, and one of the most important things about O-RAN is that it’s making that model work in the telecom space, where other attempts have failed.

It’s also important because of the unique role that 5G and O-RAN are likely to play in edge computing. Any salesperson will tell you that the first test of whether someone or some organization is a “good prospect” is whether they have money to spend. 5G has a budget and budget momentum, which means that a big chunk of carrier capex for the next three years or so will be focused on 5G infrastructure. What will that infrastructure look like? O-RAN’s goal is to ensure it doesn’t look like a traditional network, a vendor-proprietary collection of boxes designed to lock in users. Open-model 5G, including O-RAN, could deliver us closer to the point where software is what’s important in networking, and devices are just what you run the software on.

What does this have to do with the edge? The answer is that if O-RAN, and 5G in general, delivers a “middleware” or “PaaS” that can not only support 5G elements but also elements of things like CDNs or general-purpose edge computing, or (dare we suggest!) IoT, then that set of software tools becomes the Linux of networking.
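
To make that idea a bit more tangible, here’s a purely hypothetical sketch of the kind of common contract such a PaaS might expose. Nothing in it comes from 5G, O-RAN, or any real product; the interface and its methods are invented for illustration.

```python
from abc import ABC, abstractmethod

# Purely hypothetical: a common platform contract for edge workloads.

class EdgePaaS(ABC):
    """Platform services any edge workload, whether a 5G CU, a CDN cache, or
    an IoT application, would use rather than reinventing them itself."""

    @abstractmethod
    def place(self, workload: str, latency_budget_ms: int) -> str:
        """Pick an edge site that can meet the workload's latency budget."""

    @abstractmethod
    def publish_event(self, topic: str, payload: bytes) -> None:
        """Shared low-latency eventing for all tenants of the platform."""

    @abstractmethod
    def get_state(self, key: str) -> bytes:
        """Resilient state so stateless function instances can be moved."""

# If a 5G CU, a CDN cache, and an IoT back-end were all written to one such
# contract, that toolset would play the role the Linux analogy suggests.
```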

The rub here, of course, is that Linux had the UNIX APIs (actually, the POSIX standards derived from them) to work from, and for networking we’re going to have to build the APIs from the tools, designing the framework for edge hosting based on (at least initially) a very limited application like 5G/O-RAN. Not only is that a challenge in itself, but 5G in its 3GPP form also mandates Network Function Virtualization (NFV), which is IMHO not only unsuitable for the edge mission overall, but unsuitable for 5G itself.

O-RAN has at least somewhat dodged the NFV problem by being focused on the RAN and the RAN Intelligent Controller (RIC), which is outside the 3GPP specs. This happy situation won’t last, though, because much of the RAN functionality (the CU piece of O-RAN) will likely be metro-hosted, and so will 5G Core. The latter is defined in NFV terms by the 3GPP. Will the 3GPP change its direction to work on 5G as an edge application? Doubtful, and even if it did, it would likely take five years to do, and thus be irrelevant from a market perspective.

It also seems unlikely that the O-RAN Alliance will expand its scope (and change its name?) to address either 5G Core or edge computing in general. There’s little sign that the operators, who drive the initiative, are all that interested, likely because they’ve supported NFV and don’t see any need to expand into the edge at a time when they’re trying out cloud provider relationships to avoid that very thing. All these factors would tend to make another operator-driven alliance to address the edge issue unlikely to succeed as well.

So are we to wait for Linus Torvalds to rescue us? Well, maybe sort of, yes. It may be that a vendor, or perhaps a few vendors in concert, will have to step up on this one. The obvious question is which vendors could be candidates. Software-side players like Red Hat or VMware have 5G credentials and understand cloud computing, but they also seem wedded to NFV, which is useless for generalized edge computing. Network vendors have generally not been insightful in cloud technology. Cloud providers would surely have the skills, but they’d be trying to lock operators into their own solutions rather than create an open model, and that’s not likely to be accepted.

The big lesson of O-RAN may be that we’re only going to get effective progress in new applications of technology when users rather than vendors dominate the efforts. The best of open-source has come from executing on a vision from a visionary. We need to figure out how to turn buyer communities into visionaries, and that’s the challenge that we’ll all confront over the coming years.