How Much of the Cloud is Just Fog?

We’ve started to see more stories about companies not “moving everything to the cloud” but instead moving everything off the cloud. The most publicized and analyzed example of this is collaborative service provider Basecamp, which dropped cloud services costing $3.2 million in favor of in-house servers. This is a very radical example of a trend I’ve noted in the past. Most enterprises have identified at least one cloud application that broke the bank, cost-wise, and in most cases their problems seem to have stemmed from the same factors that Basecamp reported.

To start with, the notion of moving everything to the cloud has had a significantly negative impact on CIOs and IT planners. It’s led to the assessment that anyone who plans or maintains a data center application is a Luddite, and it’s contributed to what one CIO I chatted with called “gaming the numbers to get the right answer”. That means that many cloud application plans make all manner of favorable and unjustified assumptions to make the cloud look cheaper, when in fact it is almost surely not going to be.

Here’s a basic truth the article I’ve cited illustrates. There are no situations where the cost of a VM in the cloud will be less than the cost of the same VM in a well-designed data center. What makes the cloud cheaper is an enterprise’s lack of hosting scale (which is unusual), a need for scalability or elasticity, or a need to distribute hosting over a wide geography. The economies of scale a cloud provider can generate are, according to enterprises, only minimally better than those available in an enterprise data center, and the difference is more than covered by the profit margins of the cloud provider. So to start with, cloud benefits are small enough to require protection, which leads to the second truth.

Cloud development practices that are widely promoted are usually drawn from OTT players rather than enterprises. They’re supporting global user bases and message-based exchanges among users rather than transaction processing. Enterprise IT is the opposite, but the “move everything” mantra encourages enterprise IT architects and developers to forget that. The first big problem that creates is database costs.

The article shows that the largest component of Basecamp’s cloud costs was the cost of cloud RDBMS. Enterprises often move data to the cloud because having a cloud application access databases stored on the premises creates traffic charges. The problem is that hosting the data in the cloud is costly, and unless the data is fairly static you end up paying to keep it synchronized with mission-critical databases associated with transaction processing that’s not been moved.

Then there’s the problem of componentization. Scalability is difficult to achieve without explicit state control, because user interactions are typically multi-step and where you are in the stages of the interaction has to be maintained somewhere. Sometimes this results in each piece of a multi-step interaction being handled by a different component, and that increases hosting costs. Sometimes a database is used to hold state so the components themselves are stateless and scalable, but that increases database costs again.
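
To make that trade-off concrete, here’s a minimal sketch (in Python, with a plain dictionary standing in for the cloud database) of the second pattern: stateless step handlers that park interaction state in an external store. Every step pays a read and a write against that store, which is exactly where the extra database cost creeps in. The names and structure are hypothetical illustrations, not anyone’s actual implementation.

```python
import uuid

# Stand-in for a cloud database table; in a real deployment this read/write
# traffic is what shows up on the RDBMS bill.
STATE_STORE = {}

def handle_step(interaction_id, step_input):
    """A stateless handler: any replica can process any step, because the
    multi-step interaction's state lives in the external store, not in memory."""
    state = STATE_STORE.get(interaction_id, {"step": 0, "data": {}})  # paid read
    state["step"] += 1
    state["data"][f"step_{state['step']}"] = step_input
    STATE_STORE[interaction_id] = state                               # paid write
    return state["step"]

# A three-step interaction costs three reads and three writes against the
# store, even if the handlers themselves never needed to scale.
iid = str(uuid.uuid4())
for user_input in ["select item", "enter address", "confirm order"]:
    print("completed step", handle_step(iid, user_input))
```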

Scalability is itself the biggest source of cloud cost overruns. Yes, one of the big benefits of the cloud is that you can accommodate changes in load by scaling components, but of course that results in scaling of costs. Enterprises admit that in many cases they’ve adopted scaling strategies that did little to really change user quality of experience but increased costs significantly. What they realized they needed to do was to assess the way a multi-step interaction evolved. In many cases, there was considerable “think time” associated with a given step, and such steps didn’t gain much from scaling. The role of the user, and the role of software, have to be considered throughout a multi-step interaction to make a good decision on what benefits from scaling, and what limits should be imposed.
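
A back-of-the-envelope way to make that assessment, with every number below a hypothetical assumption rather than a measurement: compare how much of the user’s elapsed time per step is compute that scaling can actually speed up, versus think time it can’t touch.

```python
def qoe_gain_from_scaling(think_time_s, compute_time_s, speedup):
    """Fraction by which the user's end-to-end step time shrinks if scaling
    improves only the compute portion by the given speedup factor."""
    before = think_time_s + compute_time_s
    after = think_time_s + compute_time_s / speedup
    return 1 - after / before

# A form-filling step: 30 s of think time, 0.5 s of processing.
# Doubling capacity shaves less than 1% off what the user experiences.
print(f"{qoe_gain_from_scaling(30.0, 0.5, 2):.2%}")

# A compute-bound step: 1 s of think time, 8 s of processing.
# Here scaling is something the user would actually feel.
print(f"{qoe_gain_from_scaling(1.0, 8.0, 2):.2%}")
```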

One enterprise described their decision to shift the front-end processing of a single application to the cloud. The application was being run using a maximum of four hosted containers in their data center. When they shifted it to the cloud, even with constraints on scaling and database costs, it ended up costing them six times as much because they allocated more resources than necessary and because their database costs weren’t properly assessed. The CIO said that even if they simply adopted a cloud container hosting service for four containers, it would have cost twice as much as it did in the data center.
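
A hedged sketch of that kind of comparison follows, using made-up unit costs purely to show how over-allocation and an unbudgeted database bill compound into multiples like the ones the CIO described; none of these figures come from the enterprise in question or from any provider’s price list.

```python
# Hypothetical monthly unit costs -- illustrative only, not provider quotes.
DC_COST_PER_CONTAINER = 150.0      # amortized data-center cost per container
CLOUD_COST_PER_CONTAINER = 300.0   # managed cloud container hosting

def monthly_cost(containers, per_container, db_cost=0.0):
    return containers * per_container + db_cost

data_center = monthly_cost(4, DC_COST_PER_CONTAINER)
# Like-for-like cloud hosting of the same four containers: already about 2x.
cloud_baseline = monthly_cost(4, CLOUD_COST_PER_CONTAINER)
# Over-allocated cloud deployment (eight containers) plus a cloud database
# charge nobody assessed up front: the multiple climbs toward 6x.
cloud_actual = monthly_cost(8, CLOUD_COST_PER_CONTAINER, db_cost=1200.0)

print(f"cloud baseline vs data center: {cloud_baseline / data_center:.1f}x")
print(f"cloud as deployed vs data center: {cloud_actual / data_center:.1f}x")
```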

Given stories like these, it’s hard not to wonder if the whole public cloud thing is a scam on enterprises. It’s not. The problem is our tendency to gravitate to simple things rather than to address complexity. There are things that the cloud is great at. There are applications where OTT development practices will enhance application performance and enhance user QoE. There are also applications that result in the opposite outcome, and it’s those stories we’re now hearing. The challenges of proper cloud usage, and the penalties for not ensuring it, haven’t really changed. What’s changed is the way they’re being discussed.

How it’s discussed isn’t changing enough, though. We’re accepting that the cloud isn’t naturally cheaper than the data center for simple VM hosting. We’re starting to accept that everything is not moving to the cloud, but we’re not yet discussing when the cloud is a better choice, why it is, and what’s really happening with cloud adoption. That’s making it harder to address the really important question, which is “What role will the cloud play in the future of IT?”

Simplistic answers to questions are comforting, which is why we fall into the “simplicity delusion” all too often. The cloud is a bunch of different things, ranging from a way to avoid building data centers even if it’s more expensive, to a way to provide global, elastic, reach to critical applications. There are things we could not do without the cloud, and other things that we should never consider doing there. There is no simple answer to the critical question of the cloud’s future, neither a highly positive nor a highly negative one. But there is an answer if we look hard, and objectively, at the question.

Summarizing MWC

MWC is almost over, and it’s been a mixture of the “Old Telco” and the “New Cloud.” The combination could have possibilities, but I think the show was almost two shows because the combination was either absent or at least trivialized. I blogged yesterday about the GSMA API initiative, and that’s only one example of the fact that, as the song goes, “Two different worlds, we live in two different worlds….”

According to Omdia as quoted HERE, “5G has disappointed pretty much everybody—service providers and consumers, and it has failed to excite businesses.” I wonder how much any of these groups really expected from 5G, frankly. What I think we had, in the end, was the only budgeted telco infrastructure initiative, and you either believed in it as a vendor or telco, or you started looking for another job. Well, reality is what it is. But the point here is that while 5G has failed to live up to the hype, it’s going to take time (and maybe 6G) to move on from it. Meanwhile, the telco community is trapped in a fading delusion.

Also meanwhile, players who haven’t traditionally figured in MWC have shown up and are making waves. It’s not that Intel or Microsoft or Google or Amazon were pushing a story that was totally disconnected from 5G, but that they were trying to influence the whole “moving on” thing. “Don’t fall into the ‘G’ trap yet again, dear telcos; lift your eyes from the traditional muck at your feet and see the sky!” For those who find this satire a bit too obscure, I mean that these players are putting one foot in the “hosting and API” piece of 5G, and just hinting a bit at where that might take telcos if they elevated their planning out of the realm of simple connection.

The GSMA APIs that were announced are trite, proven failures. The concept of APIs, the promise of APIs, is perhaps the only way that operators can move on. APIs mean software components assembled to create features. That means hosting, the cloud. Since operators have demonstrated that they really don’t want to deploy their own “carrier cloud”, the New Cloud players are looking at promoting hosted features and offering a place to host them.

If you look at the offerings that the Big Three cloud vendors made, you see that they’re far from futuristic in service terms. I’ve touted my notion of the “digital-twin metaverse”, and there’s nothing that even approaches that in terms of defining what might be a broad and credible service model that adds to the top-end revenue pie because users want it bad enough to pay. Instead, what we have is a series of things that add up to “take a baby step in this direction, and eventually you’ll See the Light.”

In a way, this makes a lot of sense for all concerned, because telcos wouldn’t have let themselves become so mired in delusional 5G if they weren’t deathly afraid of anything that wasn’t an evolution of things familiar to them. Facilitating services assembled from APIs? Give them a Valium! A few harmless, familiar, OSS/BSS services exposed as APIs? Yeah, just a brief nap will let telcos recover from that. Add in the “You don’t need your own cloud” and you have something that might even raise a cautious smile.

So what’s the problem? Isn’t the MWC story proof that cloud and data center vendors are leading telcos out of the dark ages? No, just that they’re trying to lead them away from a 6G rehash of 5G disappointment. Why? Because 6G won’t come along for three or four years, and nobody wants to wait that long for a revenue kicker in these challenging economic times. We could easily see the “G” delusion replaced by the “API” delusion, a different but familiar telco dodge-change maneuver.

There are some signs, even at MWC, that at least some players may recognize that something real and different needs to be done. Evolution is good if it leads you to a state better than the one you’re in. Revolution is faster and may, in the end, be less painful, so who might be the most capable of creating a revolution? It would have to be the cloud providers. Of the cloud providers, the one I think could be the most influential in breaking the “G” deadlock is Google. While Google didn’t announce any specific new service/benefit target either, you could infer that their GDC Edge and Nephio stories could be designed to evolve into a platform for that next big thing. It’s harder to pick out a path to that happy destination in either Amazon’s or Microsoft’s story, though both might also have something in mind.

Among the network vendors, I’m watching Nokia here because they may well be the only player at the whole show that actually sees how different the future of telecom must be. But look at all the qualifiers I’ve had to use regarding Nokia. They are committed to transformation, and they’re a player whose business model is clearly threatened by the “G” delusion. I think they’ve shown that they realize that, but do they have a specific plan, a target approach, a new benefit model in mind? That’s hard to say.

The risk that the cloud provider approach of one-foot-in-the-G might present is that it might simply facilitate telcos’ putting both feet there. APIs and hosted carrier cloud are still a general strategy, one that could be seen by operators as enough progress and enough discomfort. What gets them to pull the anchor foot out and step forward instead? A clear promise, and that is not being offered by anyone at this point, not Google or even Nokia. We’ll watch for signs, though, and with hope.

Will the GSMA API Program Help Telcos Raise Profits?

What “interfaces” are to hardware, APIs are to software. Thus, any transformation in network services that’s based on adding features to connectivity is very likely to rely on APIs. But this new mission for APIs is hardly the first mission. Operators have long exposed connection services through APIs, and APIs are the basis for application connections over traditional services. With MWC just starting, it’s not a surprise that API-related announcements are coming out. The question is whether they’re pushing us toward the new API missions, or taking root in the old.

According to one announcement from the GSMA, 21 operators have joined forces to define an Open Gateway initiative, which is a set of open-source APIs that provide access to features of a mobile network. These APIs, unlike those that have been offered in the past, expose more OSS/BSS than network features, including identity verification and billing. The article notes that eight services are initially targeted: “SIM swap (eSIMs to change carriers more easily); ‘quality on demand’; device status (to let users know if they are connected to a home or roaming network); number verify; edge site selection and routing; number verification (SMS 2FA); carrier billing or check out; and device location (when a service needs a location verified).”
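
To show what consuming one of these looks like from the developer side, here’s a purely illustrative sketch; the base URL, path, payload, and response field are hypothetical stand-ins I’ve invented for the example, not the published Open Gateway/CAMARA definitions, and the real APIs sit behind operator authorization flows this skips over.

```python
import requests

# Hypothetical aggregation endpoint exposed by (or on behalf of) an operator.
GATEWAY = "https://api.example-operator.com/open-gateway"

def verify_number(access_token: str, phone_number: str) -> bool:
    """Ask the network whether the device behind this session really holds
    the claimed number -- the kind of verification in the initial eight."""
    resp = requests.post(
        f"{GATEWAY}/number-verification/verify",
        headers={"Authorization": f"Bearer {access_token}"},
        json={"phoneNumber": phone_number},
        timeout=10,
    )
    resp.raise_for_status()
    return bool(resp.json().get("verified", False))

# An OTT developer would make the same call against any federated operator,
# which is the whole point of harmonizing the APIs:
# verify_number(token, "+14155550123")
```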

On the surface, this looks very conventional in that it targets things already provided by operators rather than seeming to present new features or capabilities. There’s also been plenty of talk in the past about operator revenue opportunities associated with exposing OSS/BSS features, and none of it has generated more than a pimple on the operators’ bottom lines. Despite that conventional bias, something good might come out of this.

The first potential good thing is the federation concept I blogged about earlier. The GSMA is playing the role of an API harmonizer here, and it’s also presenting its offerings in open-source form, meaning other operators would be free to adopt it. One of the challenges operators face in “higher-layer” services is a lack of consistency of features and interfaces when such a service has a footprint larger than that of a single operator.

The second potential good thing is that you could characterize these targeted features as “facilitating services” of the kind that AT&T says it’s interested in offering. AT&T is one of the operators who signed on to the initiative, which I think means that they’re seeing this link to facilitation too. It’s long been my view that the best way for operators to gain profits from higher-layer services is through offering mid-layer features that would make those services easier for OTTs to develop and deploy.

The third potential good thing is the most general, the most potentially important, and the most difficult to assess in terms of real operator commitments. It may be that this initiative represents a kind of capitulation by operators, an admission that connection services can’t pay off and that in order to make higher-level services pay off, you need an application/mission ecosystem to develop the framework in which they’re used, before the services themselves can be commercially successful. If operators are finally seeing that, then they may take realistic steps toward a new revenue model instead of trying to resurrect Alexander Graham Bell.

One must always take the bad with the good, of course, and the bad here is that the new APIs and services may be too low a set of apples. Yes, it makes sense to launch an initiative with service features that an operator can already deliver out of their software. However, what’s facile isn’t necessarily what’s valuable. An initiative that doesn’t deliver the desired incremental revenue or other benefits is not only crippled, it’s potentially destructive of further evolution of capabilities.

That’s particularly true of international bodies like the GSMA. Recall that when the NFV ISG was launched, it got off on the wrong foot and was never able to admit it and make the major adjustments that generating relevance would have required. High-inertia processes not only take a long time to put into place, they also take a long time to change. In the market for higher-level features, speed of offering and pace of response to market changes are both critical. That’s particularly true when some of the players most likely to avail themselves of these APIs are the cloud providers, and they have their own aspirations in the “facilitating services” space.

The challenge of mid-layer opportunities is that they can be attacked from below or from above. Cloud providers could build a “telco PaaS” set of web services that would be designed to let OTTs build services faster, cheaper, and better. In fact, most or even all the feature areas the new Open Gateway initiative wants to offer could be offered by cloud providers too. Given their well-known agility and the fact that they’re facing their own revenue growth pressure, cloud providers could well grab all the truly high-value facilitation for themselves, and leave the operators to handle the stuff that’s more trouble than it’s worth.

Telefonica, one of the operators in the initiative, announced it was expanding its relationship with Microsoft “to the area of programmable networks through APIs in the context of the GSMA Open Gateway initiative.” It’s not difficult to see how this might represent the classic “fox in the hen house” challenge, focusing Telefonica on delivering little pieces of value out of its OSS/BSS but leaving Microsoft in control of the way that truly high-value stuff, like a digital-twin metaverse, would be facilitated.

If, of course, Microsoft is really that much better at moving away from comfortable, incumbent, missions. The biggest problem operators have in any advanced services concept development may be their bias toward evolution. If you’re winning the game, you have less interest in changing the rules, and the rules need to change if operators are to restore their profit growth.

Saving profits for telcos globally would require telco advanced services revenues that total over $190 billion annually, according to my model. If telcos were to be the dominant providers of facilitating features, my model says that the total service revenues for all players involved would have to be about $350 billion. If telcos provide a smaller piece of the facilitating pie, the top-line revenues would have to be higher in order for them to meet their own revenue goals. Once total revenue goals hit roughly the $650 billion level, the benefits needed to justify that level of spending become difficult to achieve.
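
To make that arithmetic visible, here’s a small sketch that reconstructs the figures under a simple proportional assumption of my own (telco revenue is a fixed share of the total facilitation-driven pie); the $190 billion target and the roughly $350/$650 billion markers are the model numbers cited above, while the share percentages are illustrative.

```python
TELCO_TARGET_B = 190.0  # annual advanced-services revenue telcos need (model figure)

def required_total_market(telco_share: float) -> float:
    """Total service revenue (all players) needed for telcos to hit their
    target, assuming telcos capture a fixed share of the pie."""
    return TELCO_TARGET_B / telco_share

# Telcos as the dominant facilitation provider: roughly the $350B total.
print(f"share 54%: total ~${required_total_market(0.54):.0f}B")
# A thinner slice pushes the required top line toward the ~$650B level,
# where the benefits needed to justify the spending get hard to find.
print(f"share 30%: total ~${required_total_market(0.30):.0f}B")
```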

This may be the critical point that telcos and their vendors miss. It’s not enough to expose a few easy and limited-value OSS/BSS or connectivity services via APIs, you have to expose things valuable enough to induce OTTs to wholesale them. Otherwise, even major new revenue opportunities won’t contribute enough telco revenue to move the needle on declining profit per bit, and something more radical may be needed, even perhaps more radical than subsidization.

What’s Behind the “New Nokia”?

Nokia has launched a new strategy, including a new logo, and the latter has gotten more attention than the former. That’s too bad because Nokia’s numbers have been good, as opposed to those of some of its major competitors. Why would they want a “refreshed company strategy to deliver sustained long-term growth” at this point? Let’s dissect their comments and offer a few besides.

CEO Lundmark said “Today we share our updated company and technology strategy with a focus on unleashing the exponential potential of networks – pioneering a future where networks meet cloud. To signal this ambition we are refreshing our brand to reflect who we are today – a B2B technology innovation leader. This is Nokia, but not as the world has seen us before.” This is obviously an example of not-uncommon CEO platitudes, but they also offer what they say is acceleration across six pillar principles. They are 1) grow market share through technology leadership, 2) expand their enterprise business, 3) manage product activity to ensure leadership where they elect to compete, 4) exploit opportunities outside the mobile services sector, 5) implement new business models like as-a-service, and 6) turn sustainability into a competitive advantage.

On the surface, many of these pillars sound like platitudes too, but I think it would be a major mistake for Nokia to tout a rebranding so loudly and then deliver a whimper instead of a bang. They surely know that, so they are in fact about to do something radical. Where among these six points might we find a hint of what it is?

The first and most obvious of the points is expanding their enterprise business. Rivals Cisco and Juniper have proved that the service and cloud provider space is under a lot of profit pressure, and thus can’t be relied upon to expand spending. If a buyer sector doesn’t spend, then sellers there can’t gain revenue and profits. Enterprise has been stronger for network vendors, and so it makes sense for Nokia to attack the space.

That “providers” are under considerable cost pressure is well-documented. EU operators have asked for subsidies from Big Tech, and Nokia rival Ericsson just commented that operators in the EU needed to consolidate if they wanted to improve their profitability. You can’t sell consumers network equipment, so if providers aren’t viable targets for revenue growth, then the enterprise is the only answer.

A focus on the enterprise means a refocusing of product strategies, which leads us to exploiting opportunities outside the mobile space. Media hype notwithstanding, there’s simply no significant opportunity for private wireless. It’s a pimple on Nokia’s bottom-line opportunity, but chasing it might be why Nokia realized that there are opportunities in enterprise networking that they would be able to address.

That will surely require implementing new business models. One specific technology point they make in their press release is that their strategy “details how networks will need to evolve to meet the demands of the metaverse era”, and it may be here that they signal why they believe a radical refocusing of their brand is essential. As always, we have to be careful of the “may be” qualifier.

Anyone who’s followed my view of the metaverse knows I believe it’s a general approach to modeling the real world through a combination of compute and network services, not a social-media-limited service. The thing that characterizes it is the need for a “digital twin” of some organized real-world system, to collect information to populate a model and then to use that model to make decisions about the system and exercise control over it. In short, it’s a shift of focus more toward the point of activity, something I’ve said is the next frontier of opportunity.

The obvious question is how this technology shift squares with the top point of expanding Nokia’s enterprise business. I think the answer to that is clear, and related in a sense to the whole private-5G craze that Nokia and others have been pushing. Real-time is edge-focused, and right now the “edge” is all the way out on the customer premises. Unlike 5G, which is really a public service that can perhaps be exploited in limited enterprise verticals, real-time edge is almost universal and almost exclusively premises-focused in the near term.

Nokia isn’t an incumbent in computing or networking for enterprises, which leaves them free to perhaps adopt a more radical approach. Cisco and Juniper both address edge computing in both a product and positioning way, but neither is big on digital twinning or metaverse. If the next big thing in computing is real-time, digital-twin, metaverse, then Nokia might be able to stake out a claim for the space before any of the current incumbents are willing to be that radical. That could give them an immediate point of leverage with enterprises.

Any attempt by Nokia to jump into the enterprise space would pressure companies already established there, especially those like Cisco and Juniper who are switch/router vendors. Nokia’s fastest means of transitioning to a greater enterprise focus would be in product areas where the service providers were already consumers. Security would be more difficult for Nokia to address, unless they decided on M&A, and that could violate their principle of being a technology leader on their own.

I think the most interesting near-term question is what Nokia might decide to do in virtual networking. Their Nuage offering is one of the most mature virtual networking product sets, and one of the best. They expanded it to SD-WAN, and they offer a managed SD-WAN service, which plays to their goal of an as-a-service shift. Again, I believe virtual networks and NaaS is a frontier of networking. Might Nokia be more aggressive there too, and steal some opportunity from enterprise incumbents who have been shy about pushing the topic with enterprises?

There is a lot of potential here, and perhaps the best part is that the key issue of edge computing and digital twinning is one Nokia could also leverage in the service and cloud provider spaces. The enterprise is a good incubator for this opportunity, but its maturation would certainly transform both service provider and cloud. In other words, this could work.

It might also be a bad sign, of course. 5G isn’t what vendors hoped it would be, and even what it is has started to wind down. There are no major budgeted service provider initiatives on the horizon for at least four years. The cloud’s growth is slowing. It could be that Nokia sees the handwriting on the wall with respect to the provider side, and it could be that the conservative, glacial-paced, mindset of the service provider space has permeated Nokia management. It could be that they won’t be doing any of the aggressive things I’ve noted here, in which case they may be the leading edge of a major problem for vendors that depend on service provider opportunities. You can’t just say “enterprise” and win there, and until we know whether Nokia has anything tangible in mind, we won’t know whether they can succeed with their transformation. A logo isn’t enough, gang.

Why Operators Need Federation (and Why it May Be Too Late)

Mobile services rule, that’s a fact of life for the operators. For decades, they’ve been more profitable than wireline services, and increasingly they’re being used to make customer relationships sticky, pulling through wireline broadband in a competitive market. Comcast, for example, has long had an MVNO relationship with Verizon and uses its mobile offerings to buoy up its cable broadband. Recently, they’ve started to deploy their own 5G technology in high-traffic areas to reduce their MVNO costs, but that doesn’t address what’s perhaps the major mobile challenge. It’s the first service that operators have to compete in that requires major out-of-region presence. It won’t be the last.

One of the biggest challenges of “advanced” or “higher-layer” services is that few candidate services can be profitable with a limited service footprint. Business services, which is the segment most credible higher-layer services would address, are almost necessarily at least national in scope, often continental, and sometimes truly global. What do operators do to address this? There are several options, and all of them pose challenges.

The simplest approach is the “segment-and-extend” model. All higher-layer services are “higher” in that they’re overlaid on connectivity. Operators typically have agreements to extend connectivity out of their home areas. For mobile services, they could use tower-sharing or MVNO relationships, for example, and for wireline they could simply have a resale agreement with other operators to provide connectivity, or even ride on the Internet. This connectivity could then extend the reach of the over-the-top elements, essentially backhauling the services’ higher-layer requests to one of their own hosting points.

The problem with this approach is that service quality may be difficult to ensure, and variable across the market geography. Not all operators have resale agreements with others, not all areas can be covered by these agreements, and the agreements are often for a limited term, to be renegotiated regularly. If the “data dialtone” of the Internet is used, there are obviously issues with QoS and security that would have to be addressed, though these could perhaps be mitigated at least in part by linking the higher-layer services to an SD-WAN.

A second possibility, one that can be seen as an expansion of our first, is federation, meaning the creation of a pan-provider mechanism for sharing service features in a specific wholesale/retail framework. This concept was actually attempted about 15 years ago with the IPsphere Forum (IPSF), and gathered significant international support at the time, but gradually lost focus through a combination of vendor wrangling and competing standards initiatives. A federation approach, like IPSF, would have to create both a mechanism for the exchange of service components and a mechanism for composing services from components. I participated extensively in the IPSF and I can testify that this is no easy task, but it is possible.

The Nephio initiative launched in 2022 might be a path toward a federation strategy based on Kubernetes, but so far it’s focused largely on composing services through software orchestration, not on standardizing a mechanism for creating and exchanging components among operators. I think it’s possible Nephio could be augmented or extended, but that process isn’t underway at the moment and I can’t say if or when it will be.

The biggest problem with federation is that it would require some form of formal coordination and cooperation, something like a standards body. In the telecom space, these bodies are common but their operational pace is usually glacial, which means that it would be difficult for a federation approach to be formalized in time to respond to market conditions. As a means of adding higher-layer services, given that there are no initiatives underway to do what’s necessary, I suspect federation would take too long.

The third option is the “facilitating services” option, espoused by AT&T for one. The idea here is to offer OTTs a wholesale service set that would allow them to build higher-layer services at a lower cost. The EU JV on advertising/identity services is an example of this. I like the facilitating services idea, but it’s taking a different slant on the problem, one that cedes higher-layer service primacy to the OTTs and thus limits operator profits and customer ownership.

The big advantage of facilitating services lies in this limitation, because operators in general are awful at being OTTs. Providing truly relevant facilitating services would let them dodge this issue and all the issues associated with the other two options. Each operator would be deploying facilitating services so there’s no need to coordinate, right? Well, maybe.

The problem with facilitating services overall is that OTTs have the same, or greater, need to deploy their services across a broad footprint. How do they do that if all the operators within that footprint don’t offer facilitating services, or choose to offer very different selections of services? Facilitating services are great where there’s a natural geographic boundary to what they’re intended to facilitate.

You can see where we’re heading here. For operators to be able to offer credible higher-level services across a broad geographic footprint, they’d need to work some bilateral deals with other operators for connectivity, or they’d need to ride their services on or over the Internet. That would make the operator an OTT, and if they want to avoid that (which most surely do) then they have to look at facilitating services, with an eye to creating a set that’s intended to facilitate things within their own footprint, meaning without suffering from a lack of support from other operators.

Doomed, then? Well, there may be one more option. There are industry groups that are primarily aimed at OTTs, and even open-source projects. If an operator were to build a service strategy based on one or more of these, they might then enjoy a shot at a broader footprint by promoting the approach to other operators who, after all, have the same issues.

I offer as an example the Open Metaverse Foundation. I don’t think that this initiative is looking at the metaverse in as broad a way as they need to, but something like this could create a kind of “federation by proxy” or “commutative federation”. Things equal to the same thing are equal to each other, as they say in geometry or algebra, so if two operators base their strategy for either OTT or facilitating services on an open body like OMF, then those operators should be able to federate their services across their combined footprint.

Operators need to recognize that they let the federation issue lie fallow for too long, and so they need to figure out how to play catch-up now. The only viable approach appears to be jumping on the bandwagon of some open-source or industry group that’s filled with representatives who know how to make progress, and make it quickly. Otherwise, developments in the service and infrastructure markets are going to make it much harder for them to escape commoditization in the future.

Telco Capex, Infrastructure Technology Trends, and Vendor Opportunity

Like most analysts, I don’t often cite or praise the work of other analysts, but today I want to give a shout-out to Omdia for a figure they posted on LinkedIn, titled “Global telecoms capex flow, 2021”. It makes, in convenient graphics form, some of the points I’ve been raising in the evolution of telecommunications infrastructure and services, and it should serve as a warning to vendors not to take their eye off the ball.

Everyone knows that networks are aggregation hierarchies aimed at securing optimum economy of scale. At the edge or access end, the distribution of network users in the real world means that network infrastructure has to reach out to get traffic onboard. From there, traffic is concentrated inward to take advantage of optical capacity benefits. Access, aggregation, and core.

The Omdia chart shows the segments of operator capex, and while it doesn’t quantify the spending it does represent the categories proportional to their contribution. The fattest piece is the access network, which shouldn’t surprise anyone. The thinnest piece, smaller even than the devices-and-CPE piece, is the core, but that doesn’t mean that the core isn’t important, or isn’t a viable market target, particularly for new entrants.

The problem with the access network is that there’s so darn much of it that operators have to squeeze every cent of cost out of it or they risk being totally unprofitable. That means that capex is always under pressure there, and so is opex, because much of what I’ve always called “process opex” relating to actual network operations is related to customer care. “My network is broken if I say it is,” is a reasonable user mantra, and so it’s critical that everything humanly possible is done in the operations automation area to reduce the burden of finding problems and fixing them.

All of this tends to make access networking a fortress of incumbency. That’s particularly true in the wireless area, because even “open” initiatives like O-RAN don’t immediately convince the operators to adopt best-of-breed purchasing and do the necessary integration. In any case, it’s always difficult to introduce something new into a vast sea of old-ness without creating all manner of operations issues. That’s why 5G was an important opportunity for up-and-comings (one largely missed), and why the deployment of 5G New Radio over LTE EPC worked in favor of incumbents by accelerating early deployment, ahead of open-model specifications.

The beauty of the core network is that it does represent a pretty small piece of capex and opex, which means that if there’s a need to modernize and refresh core technology, it may be practical to do that by simply replacing the core network overall. There are millions of access devices in a network, but hundreds (or fewer) of core devices.

But even true beauty often has a blemish or zit here and there. With the core network, that blemish is the small contribution it makes to capex. If operators don’t spend much in a given space, then vendors in that space don’t make much. A success in the core here and there is going to quickly exhaust the total addressable market. A good play in the network core is a play that knows the core is a stepping-stone to something that has more capex associated with it. But, given operator reluctance to increase capex, what could that something be?

Cloud infrastructure, meaning server hosting resources, currently accounts for almost three times the capex of the network core. IT platform tools and software account for about double the capex of cloud infrastructure. And best of all, the access network that’s by far the biggest contributor to capex has one essential requirement, and that is to connect to all of this stuff…cloud, software, and core. That’s why I love the “metro” opportunity.

Traffic in the access network naturally moves into the core via some “on-ramp” technology. Every user can’t be efficiently connected to the core, so that on-ramp is the focus of aggregation within the access network, the collecting of traffic within a geographic area. Thus, this on-ramp point is both geographically linked and serves a concentrated amount of traffic and user connections. That makes it a great place to host things that are linked to geographies, which would include content delivery and IoT.

It’s also a great place to achieve economies of scale in hosting and storage. Go out further and you multiply the number of sites you’d need by several orders of magnitude, which means there would be no economy of scale and little chance of operational efficiency. Go deeper and there’s too much traffic to allow you to recover individual user characteristics and serve user needs, or to support real-time application hosting.

Where these on-ramp points are is of course a reasonable question. I think it’s one that was answered decades ago in the heyday of telephony. We had “edge offices” supported by “Class 5” switches. These offices were linked to “tandem” offices supported by “Class 4” switches, and those were located in what came to be called “Local Access and Transport Areas” or LATAs. We had about 250 such areas in the US, and that roughly corresponds to the number of metropolitan areas. Thus, a “metro” is, historically, the right place to jump on to a core network and to host incremental service features.
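
Here’s a toy calculation of that placement trade-off, with every count and parameter a hypothetical assumption rather than anything drawn from the Omdia figure: fewer, deeper sites mean more users per site and a better economy of scale, and the reason to stop at the metro level is the traffic-concentration and real-time problem described above, not cost.

```python
# Hypothetical national footprint and tier site counts -- illustrative only.
TOTAL_USERS = 100_000_000
TIERS = {"edge office": 10_000, "metro": 250, "core": 25}

def users_per_site(sites: int) -> float:
    return TOTAL_USERS / sites

def unit_cost_index(sites: int, scale_exponent: float = 0.8) -> float:
    """Toy economy-of-scale model: per-site cost grows sublinearly with site
    size, so cost per user falls as sites get bigger."""
    per_site = users_per_site(sites)
    return per_site ** scale_exponent / per_site

for name, sites in TIERS.items():
    print(f"{name:12s} sites={sites:6d} "
          f"users/site={users_per_site(sites):12,.0f} "
          f"unit-cost index={unit_cost_index(sites):.4f}")
```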

OK, topologically and geographically that all makes sense, but what about technologically and (most important) financially? There are three possible ways that “metro” could be supported. First, you could consider metro to be the inside edge of the access network. Second, you could consider it to be the outside edge of the core network. Finally, you could consider it an entirely new layer. Which option is best depends on perspective.

If I were a Nokia or Ericsson, I’d want to promote the metro to be a piece of the access network, because I’d be an access-network incumbent. Favoring this view is the fact that 5G specifications call for feature hosting, which means that hosting and “carrier cloud” are arguably requirements for 5G access networks (that’s a piece of the Omdia figure that’s almost three times the size of “core”, by the way).

If I were DriveNets, I’d want metro to be the edge of the core, because I’d be pushing core router sales. The DriveNets cluster router model fits well in the metro space too, in part because you could use it to connect server pools locally as well as support aggregation outward and core connectivity inward.

If I were Juniper, I’d want metro to be a separate, new, space. Juniper has actually articulated this to a degree with its “Cloud Metro” announcement a couple years ago. This positioning would let Juniper ease its way into access capex via 5G (which they’ve been promoting anyway), and also support general carrier-cloud missions.

Each of these three strategies has a vendor most likely to benefit from adopting them. Which strategy, which vendor, will win is the question that might determine the future of network infrastructure and the future of the vendors involved. It’s going to be fun to watch.

An Attempt to Assess Section 230

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This text, part of Section 230 of 47 US Code, is often called “the 26 words that created the Internet”. It’s this specific section that the US Supreme Court is being asked to examine in multiple cases. There are two questions arising from that. First, what will SCOTUS decide? Second, what should it decide? We can’t address the first, so we’ll have to focus on the second.

The Google case that’s already been argued is a narrow example of Section 230. The assertion here isn’t that Google is responsible for YouTube content, but that it’s responsible if it decides, by any means, to promote specific content that turns out to be outside traditional constitutional protections. That raises what I think is the key point in all of this, which is that this shouldn’t be a question of “everything is protected” or “nothing is protected” under Section 230.

CNN’s view attempts balance, and at least lays out the issues. It also identifies a basic truth that reveals a basic untruth about the opening quote. These 26 words didn’t create the Internet, they created social media. Finally, it frames in an indirect way the question of whether social media is simply an extension of a real-world community or something different. That leads us into the slippery world of the First Amendment.

Freedom of Speech, which is what the First Amendment covers, doesn’t mean that anyone can say anything they want. The well-known limitation regarding yelling fire in a crowded theater is proof that the freedom doesn’t extend to areas where public safety is involved. Most also know that if you say or write something that is both untrue and harmful, it’s a form of defamation, and you might be sued for it. That means that exercising your freedom of speech and uttering a falsehood can’t interfere with someone else’s reputation or livelihood. There are also legal protections against speech that’s deemed “hate speech.” Free speech has limits, and those limits can be enforced.

Except, maybe, online, and that’s where the issue of whether social media is an extension of the real world comes in.

If Person A says something that’s criminally or civilly actionable, but yells it out in a vast wilderness, it’s unlikely they’d be held accountable even if someone overheard it. Similarly, saying the same thing in a small gathering wouldn’t likely be prosecuted unless they were uttering an invitation to join a criminal conspiracy or the “gathering” was one open to a wide range of people and ideas. Suppose you uttered a defamation to a reporter? Suppose you characterized an ethnicity or gender in a negative way in a group of people you didn’t know? It seems like many of the exceptions to free speech are exceptions that relate to the social context, and that’s why it’s important to decide what social media is.

You can create a social-media audience in a lot of ways, from a closed group where people are invite-only and where the topic is specifically identified ahead of time to a completely open audience like that theater someone could be charged for yelling “Fire” in. It’s not clear whether everyone who used social media would understand the scope and context into which their comments were introduced. That alone makes it difficult to say whether a given utterance should be considered “free speech.”

Then there’s anonymity. Do you know who is posting something, or do you just know who they say they are? Some platforms will allow you to use a “screen name” that doesn’t even purport to identify you, and I don’t think any popular platform actually requires solid proof of identity. Redress against the person who uttered something isn’t possible if you don’t know who they are.

Finally, there’s “propagation velocity”. Generally, people are more likely to get a serious penalty for libel than for slander, because the first of the two means the offending remark was published and the latter that it was spoken. Spoken stuff is gone quickly, published stuff endures as long as a copy exists. If there’s harm, it endures too.

Opponents of Section 230 believe that immunizing social-media companies from actions regarding what they publish, but don’t create themselves, has made the platforms a safe harbor for abuse of free speech. Supporters of the section believe that a social media forum is simply a virtual form of the crowd on the street corner, which orators have addressed from soap boxes since the dawn of our Constitution.

What’s right here? Let’s start by looking at what, IMHO, is clearly wrong. It would be wrong to say that a social media platform is responsible for everything that every person on it says. To me, that clearly steps across the boundary between Internet forums and the real world and applies a different set of rules to the former.

I also think it’s wrong to say that social media is responsible for policing the sharing of posts within a closed community that people join if they accept the community value set. To me, that steps across the line between such a community and a party where people discuss things among themselves. Same rules should apply to both.

What is right, then? I think that if somebody wants to share a post, that post has to be subject to special moderation if it is shared outside those closed communities. You can’t yell “Fire!” in a crowded theater, nor should you be able to in a crowded Facebook. Meta should require that any broadly shared post be subject to explicit screening.

It’s also right to require the same thing of posts that earn a social media recommendation. If a social-media player features a post, they’re committing some of their credibility to boost the post’s credibility, and they have to take ownership of that decision and accept the consequences of it. This is where Google’s search case comes into play IMHO. Prioritizing search results via an algorithm is an active decision that promotes the visibility of content, and I think that decision has consequences.

I also think it’s right to place special screening requirements on any posts from sources that have not been authenticated as representing who they claim to be. That identity should be available to law enforcement, or if required in discovery in a civil defamation lawsuit. Social media may not be responsible if a user defames someone, but they should not offer users a level of anonymity that’s not available in the real world.
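
Expressed as a rule set, the three requirements boil down to a single screening gate. This is a hypothetical sketch of my proposal, not anyone’s actual moderation pipeline.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_verified: bool       # identity authenticated to the platform
    shared_outside_group: bool  # leaving the closed community it was posted in
    platform_promoted: bool     # featured/recommended/prioritized by algorithm

def requires_screening(post: Post) -> bool:
    """Inside a closed, opt-in community the platform stays hands-off;
    screening kicks in when reach is amplified or the source is anonymous."""
    return (post.shared_outside_group
            or post.platform_promoted
            or not post.author_verified)

# A post kept inside its community by a verified member: no platform duty.
print(requires_screening(Post(True, False, False)))  # False
# The same post boosted by a recommendation algorithm: screening required.
print(requires_screening(Post(True, False, True)))   # True
```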

Is there any chance the Supreme Court is going to do something like this? Many of the justices are of my own generation, so it’s unfair (I think) to assume they’re all Luddites. However, there’s no question that my own views are colored by my own technical bias and social experience, and there’s no question that in the end what’s going to matter here is what the law says, which I can’t judge as well as they can. Might the law not be up-to-date in an Internet world? Sure, but many people and organizations probably think that the law should be updated to represent their own views better. There’s no law at all if everyone gets to write their own, and if the law is at fault here, we need to address changing it formally, not claiming it doesn’t apply.

Looking at the Buyer Side of NaaS

One of the tech topics that seems hardest to track is “network-as-a-service” or NaaS. Like a lot of technologies, NaaS is subject to what we could kindly call “opportunistic redefinition”, meaning NaaS-washing. When that happens, definitions tend to get fuzzy because vendors broaden the definitions to ride the media wave. I wondered whether we might address this problem by starting from the other end, the demand or buyer side, so I’ve culled through six months of enterprise data on NaaS, and here’s what I found.

If we had to pick a NaaS definition or service model from the stories and vendor offerings, we’d likely pick “usage pricing” as that model. That’s not an unreasonable definition either, given that in cloud computing, the “as-a-service” model is often based on usage pricing. The first question we’d have to ask about NaaS is therefore whether a usage-priced service model is actually appealing to the market, meaning whether it supports a symbiotic buyer/seller paradigm.

Enterprises listed “cost savings” as their absolute number one benefit objective for NaaS. Nothing else even comes close, in fact. Of 112 enterprises in my analysis, every one put it at the top of the list. The presumption is that by offering network-as-a-service, an enterprise could lower costs by reducing average capacity needs, since NaaS would adapt to peak periods. Very logical.

The challenge is that if you ask service providers, 18 of the 19 who admitted they were “considering” a NaaS offering said that their primary goal was to generate new service revenue. Let’s parse that combo. Here’s the buyer, who wants NaaS because it lowers his cost, and here’s the seller who offers NaaS so the buyer will spend more. If a chatbot offered that dazzling insight, we’d say it made a mistake.

The fact is that there are very few service features that a buyer would spend more on. Security, cited as the number two benefit of NaaS by 97 of the 112 enterprises, was the only example enterprises offered as a justification for a higher cost for NaaS capability, and only if NaaS security let them reduce other security spending by more than the NaaS service would increase their service spending. This shouldn’t be a surprise; enterprises have a fixed business case for network services, and so there’s no incentive to increase spending if you can’t increase the business case. There are few “new” network projects these days, so there’s little chance of a major new benefit coming along to justify a new cost.

What about security, then? The “benefit” enterprises think NaaS could bring is the benefit of traffic, user, and application compartmentalization, meaning a form of connection control. While this is the most-cited security benefit, it’s cited by only 43 of the enterprises. The majority don’t have a specific NaaS feature they think would enhance security, and that they believe could then result in lower security spending elsewhere. For that group, I think a NaaS security benefit could be realized only if the service provider could establish a credible NaaS feature connection. For the group of enterprises who think NaaS connection control could enhance security, there are still issues that would have to be addressed.

First and foremost, connection control benefits from NaaS derive mostly from virtual networking as a likely NaaS foundation. I think that virtual networking is indeed likely to be the foundation of a credible NaaS, but you can do virtual networking without usage pricing, which is still a feature most think is the primary feature of NaaS. We could tap off the security argument for NaaS simply by using SD-WAN and/or broader virtual-network features.

Not all SD-WAN implementations offer connection control, because not all are aware of user-to-application sessions and capable of deciding what’s allowed and what isn’t through policy management. Of over 40 offerings in SD-WAN, only about five have any features in that area. Of my 112 enterprises, only 32 could identify even one of those, which means that the connection control features of SD-WAN aren’t widely recognized. Going to the broader virtual-network space, 88 enterprises could identify an actual provider of virtual networks (VMware’s NSX is the most-recognized) but only 15 could identify any other option, and only 9 said they used virtual networks for connection control and segmentation outside the data center.

A general virtual-network technology, which is available from VMware but also from network vendors like Cisco, Juniper, and Nokia, is capable of what’s needed for security-driven connection control, but hardly anyone knows. SD-WAN is generally not capable of doing that, but few enterprises know specific implementations that can offer it. One thing this argues for, IMHO, is a unification of the virtual-network space, a combining of SD-WAN and broader (often data-center-multi-tenant-centric) virtual network models.

My enterprise contacts weren’t spontaneously committed to a model like that. The group of enterprises who saw connection control and virtual networks as linked (9 in the real virtual-network group and another 8 in the SD-WAN group, for a total of 17) saw virtual networks creating closed application or user-role communities to which both workers and applications/databases could be assigned. That’s at least close to the model Juniper supports with the SD-WAN stuff it acquired with 128 Technology.

That would then raise the question of whether such a super-virtual-network model is the real service opportunity, and thus whether “NaaS” has any incremental value. I think it does, or could, if we take a different slant on what “as-a-service” means. Rather than linking it with usage pricing, look at it as connectivity as a service. That would mean that the connection model would be more dynamic than would typically be the case with a virtual-network strategy alone.

Dynamic connection management could mean a lot of things, and be implemented in a lot of ways. You could envision “communities” to which both users and applications could bind, but rather than the binding being largely static, it might be controlled by policies set at the community level, and might even allow users with certain credentials to join and leave communities at will. You could envision policies that would even look at the membership, and expel people or applications if certain users/applications joined. You could even envision a community policy to disconnect everyone, in situations like a security problem.
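
Here’s a minimal sketch of what such a policy-driven community might look like in software. Everything in it (the class, the credential check, the coexistence rule) is a hypothetical illustration of the idea, not a description of any vendor’s NaaS.

```python
class Community:
    """A NaaS-style connection community: members join under policy, and the
    community can prune membership or disconnect itself on policy events."""

    def __init__(self, name, required_credential, banned_pairs=()):
        self.name = name
        self.required_credential = required_credential
        self.banned_pairs = set(banned_pairs)  # combinations not allowed to coexist
        self.members = set()

    def join(self, member, credentials):
        if self.required_credential not in credentials:
            return False                        # policy: credential-gated admission
        self.members.add(member)
        self._enforce_coexistence()
        return True

    def leave(self, member):
        self.members.discard(member)

    def _enforce_coexistence(self):
        # Policy: if a disallowed combination appears, expel the second party.
        for keep, expel in self.banned_pairs:
            if keep in self.members and expel in self.members:
                self.members.discard(expel)

    def disconnect_all(self):
        # Policy: a security problem tears the whole community down at once.
        self.members.clear()

ops = Community("order-entry", "role:order-clerk",
                banned_pairs=[("erp-app", "contractor-laptop")])
ops.join("erp-app", {"role:order-clerk", "app:erp"})
ops.join("alice-desktop", {"role:order-clerk"})
print(ops.join("guest-tablet", {"role:visitor"}))  # False: lacks the credential
ops.disconnect_all()                               # everyone off in one action
print(ops.members)                                 # set()
```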

The interesting thing is that while users see usage pricing of services as useful only if it saves them money, all of my enterprises thought that this NaaS model would be useful in security and that they’d pay for it if it at least contained overall security costs. More than half would accept a higher cost including security, if security were “significantly” enhanced. Thus, dodging the usage-pricing issue might actually give network operators a path to revenue growth out of NaaS. But this would mean some serious work marketing features, something operators are notoriously bad at. Perhaps it’s time for them to learn to do it better.

Cisco Comes out of “Follower” into “Fast”

Let’s start by looking at what we’d like to learn from the Cisco earnings call held Wednesday. Yes, it’s nice to know how Cisco did, particularly relative to competitors like Juniper. Yes, it’s nice to know how they characterize their quarter and what guidance they offer. What’s nicer is relating the Cisco information to the conditions in the network market, and to do that we have to factor in the Two Great Truths about Cisco as a network competitor. The first is that Cisco is a sales monster, a company that knows how to exploit account control. The second is that Cisco is an admitted “fast follower” in terms of tech innovation, not a leader. Those are the things we’ll come back to in our analysis of their results.

OK, now to the details. Cisco exceeded guidance and expectations in both earnings and revenues, and issued better-than-expected guidance. Revenue was up 7%, which is impressive in what’s surely still a difficult macro environment. Cash flow was at a record high, and recurring revenues accounted for 44% of total revenues, which shows Cisco is managing the transition to subscriptions well. Software subscription revenue was up 15% in fact. All the market segments did well except the service provider space, which was off (Cisco says) because the providers were taking time to absorb the pent-up deliveries generated because of easing supply chains.

What was interesting about Cisco’s earnings call was that they were more “futuristic” than usual. Cisco’s calls have always been replete with stories about “execution”, meaning that they were really about sales effectiveness more than product suitability. On their most recent call, they talked about “web-scale cloud infrastructure”, “hybrid cloud”, and “IoT”. Combine that with their cited analyst forecasts that IT spending will increase mid-to-high single digits in 2023 and you have the foundation for aggressive guidance, which is what Cisco offered.

They also made some specific comments in the area of management, which they were careful to characterize as “cloud management” to make it inclusive of hosting platforms. They promised cloud-native full-stack visibility via ThousandEyes and AppDynamics, and they promised to bring AI/ML into the management story at a faster pace.

What I’m seeing here is those Two Great Truths playing out in a different way. On the one hand, there is no question that Cisco’s sales prowess showed in the quarter. Over the last three months, enterprises have been telling me that a vendor with account control, and in particular an intense sales presence in their companies, has a better chance of winning more of the assigned budget and of securing the budget increases that would benefit it. Cisco played those cards very well, and that’s the big reason for their success in the quarter.

The second, “fast-follower”, truth is also playing out. Juniper, arguably Cisco’s arch-rival in network equipment, has been consistently better than Cisco at innovating. Their management strategy is better because of Mist. Their SD-WAN and virtual networking strategy is better because of 128 Technology. They had a decent quarter largely because of these technology innovations. Being a fast follower means not pushing a technology innovation until a competitor proves it’s really beneficial, then making an aggressive run at the space to own it. That’s what I think Cisco’s call is signaling. It’s time to emphasize the “fast” piece of “fast follower” over the “follower” piece. They’re coming after Juniper’s differentiators.

This is a smart play for Cisco, not only because it’s consistent with our second great truth, but because it plays off the first. If your customer is looking at a competitor because they’re offering something innovative, you can step in at the sales level if you have account control. Nobody in the space, Juniper included, can play account control like Cisco can, and that means that competitors like Juniper have to rely on something else to level the playing field. That something else is marketing.

Marketing is the invisible, ubiquitous, powerful, virtual salesperson. It can develop demand, it can frame the competitive landscape that will define the features that are important to the buyer, and it can grease the skids of financial approvals. In short, it can do much of what sales account control can do, and it can do a few things in the early part of the opportunity-creating process that sales can’t address. Highly effective marketing can counter Cisco’s sales effectiveness, particularly if you combine it with technology innovation.

Juniper’s challenge, which I’ve blogged about for several years, is that they’ve underplayed their product assets. In their last earnings call, they talked about their technology innovations more effectively than they had in prior calls, and frankly more effectively than they talk about them on their own website. That’s left their sales initiatives to test the waters on issues like cloud, AI, and SD-WAN without ever seizing ownership of the key features of the things they’ve innovated.

You cannot out-sell Cisco, period. If you rely on sales as your means of contesting the issues that drive purchasing, you surrender the field to the acknowledged sales elephant, which is Cisco. You can defeat that elephant only through incredibly aggressive marketing. It’s no accident that Cisco’s earnings call reads almost as a counterpoint to Juniper’s call, a positioning of Cisco as the real power behind the innovative spaces that Juniper has exposed to the market. They are coming, as I’ve said, and coming hard.

The answer to that? The only one is that incredibly aggressive marketing. Cisco is signaling that it’s going to follow, perhaps only implicitly, its historical “five-phase strategy” approach. A competitor comes up with something good. Cisco announces a five-phase plan that makes that something into a piece of a grand and glorious whole, a super-concept that Cisco, at the time of the announcement, is already in phase two of. The only defense against that approach is to define the features of the space, to create the columns of the product comparisons, before Cisco can do that.

Cisco’s call is a clear sign that it, and the market overall, are entering a new phase in networking. Cisco is saying that they recognize that the feature drivers of networking are changing and that they now have to demonstrate their competence, nay their leadership, in those new spaces. If they succeed, they’ll take a leadership position in that new-model networking and their sales account control will keep them in that position. If they don’t, if Juniper or someone else defines all those feature points, then Cisco’s upcoming quarterly results may be harder to sing about.

The Hype and Hope of Open RAN

Is “Open RAN” in 5G something we should welcome, or be afraid of? Is it all a part of the 5G hype, is it its own hype category, or is it not hype at all? What’s its relevance to telecom infrastructure overall? All of these are important questions, so let’s try to answer them.

“Open RAN” is generally applied to initiatives aimed at creating open-model 5G RAN technology, though you’ll hear most about the O-RAN Alliance, the dominant player in the space. The goal of the initiative is to create the broadest possible open specification for RAN elements so that a complete implementation can be defined. The 3GPP RAN specs leave some components of the RAN (technically, 5G New Radio) opaque, and Open RAN defines specifications and decomposition models for them. This permits best-of-breed element selection, and it also broadens the field of potential market entrants.

The latter is arguably the primary goal of Open RAN. Without it, the major mobile infrastructure vendors (Ericsson, Huawei, and Nokia) would likely lock up deals. That’s because those opaque components of the 3GPP spec would be implemented in different ways by smaller vendors and that would risk lock-in to a small and unfamiliar player. Needless to say, the major mobile infrastructure vendors are of two minds about this. On one hand, buyers fearing vendor lock-in could have their fears mitigated by Open RAN conformance. That could accelerate adoption. On the other hand, admitting others into the market isn’t exactly a smart competitive move for the three giants. A recent Light Reading story quoting Ericsson suggests that by the end of the decade, Open RAN could account for a fifth of the RAN market sales.

This answers our first question. If you’re one of the Big Three of RAN, Open RAN is a mixed blessing, and I suspect most would privately admit they’re negative on the concept, supporting it only under competitive pressure. If you’re anyone else in either the operator or mobile infrastructure world, it’s a blessing. But what about its hype status?

5G RAN is already deploying widely, but if Ericsson is correct about the numbers, Open RAN will have only a minimal impact on 5G in the near term. And is Ericsson correct? My own model suggests that the peak penetration of Open RAN depends on what you mean by the term. Both Nokia and Ericsson have committed to Open RAN, or to convergence of their products with the spec, so if that counts as an Open RAN deployment, then I think we’ll see Open RAN hit 20% penetration some time in 2025.

But remember that the idea was to create a truly best-of-breed model for RAN. If we define Open RAN penetration as the share of RAN implementations that actually take advantage of the Open RAN model to support multiple vendors, then I think it’s doubtful we’ll ever hit that 20%. My model isn’t accurate as far out as 2030, but it shows penetration plateauing at about 17%, and not even approaching that level until 2028.

But even if Open RAN has significant penetration, can we say it’s not hyped? As it happens, the issues with Open RAN hype may well be connected with the issue of 5G hype. It’s not that 5G is being exaggerated in deployment terms; it already dominates all the major markets. The problem is that 5G is usually characterized as a major new source of operator revenues, and since I doubt that it will be, the claim qualifies as hype in my book. I’ve blogged plenty on that, so feel free to look back if you want my reasons. So what would, could, should the role of Open RAN be in those new service revenue opportunities? That’s what decides whether Open RAN is hyped.

I’ve rejected the notion that just having 5G Core with network slicing was going to have significant impact on the mobile market. If nothing else can have such an impact, then you could argue that Open RAN isn’t very relevant except to mobile-infrastructure geeks. What would create a non-slice impact? Edge computing.

The big innovation in 5G, from an infrastructure and openness perspective, is the use of hosted elements rather than a fixed set of static appliances. Obviously hosted elements need hosts to run on, so presumably any 5G implementation would promote hosting. Open RAN defines more elements that rely on hosting, so it would promote more hosts. OK, that’s true, but there’s a big “however”. Ericsson’s point in the article is that Open RAN is in effect edge/cloud RAN, and that this model raises serious questions about the handling of the data plane.

Another 5G innovation is the separation of the control and “user” planes (CUPS). Functionally, the 5G user plane is very much like an IP network, but it has a collateral role in CUPS because there’s a link between the RAN implementation and the Core (think I-UPF in MEC and UPF in C-RAN), and also between slice management (which is 5G-specific) and the Core. Mobility management and slice management impact the UPF flow, which means that some UPF features would be “hosted”, and that in turn implies that the data plane of a mobile network would be hosted. You can implement a router as a hosted software instance, but it’s not likely to be the fastest or best option, which was Ericsson’s argument.

Solution-wise, the right answer would be a “router” that was a real, optimal IP data device in all respects, but that also offered the things mobile infrastructure needs, namely GPRS tunneling (GTP) and its control. Router plus 5G UPF-specific features equals 5G UPF.
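
For a sense of what that tunneling means at the data-plane level, here is a small illustrative sketch (Python purely for readability, not a UPF implementation) that builds the mandatory 8-byte GTP-U header a UPF prepends to each user packet on the N3 interface, per 3GPP TS 29.281. A real UPF also handles extension headers, QoS flow marking, buffering, and mobility-driven tunnel switching, and has to do all of it at line rate, which is exactly the concern about hosting it on general-purpose servers.

```python
# Minimal sketch: encapsulating a user IP packet in a GTP-U tunnel, the kind
# of per-packet work a UPF's data plane does on the N3 interface (3GPP TS
# 29.281). Illustrative only; a real UPF adds extension headers, QoS flow
# handling, and line-rate forwarding.

import struct

GTPU_VERSION = 1          # version field (3 bits)
GTPU_PT_GTP = 1           # protocol type: GTP (as opposed to GTP')
GTPU_MSG_GPDU = 0xFF      # message type for an encapsulated user packet (G-PDU)

def gtpu_encapsulate(teid: int, inner_ip_packet: bytes) -> bytes:
    """Prepend the mandatory 8-byte GTP-U header to a user IP packet."""
    flags = (GTPU_VERSION << 5) | (GTPU_PT_GTP << 4)   # E, S, PN bits left at 0
    length = len(inner_ip_packet)                      # excludes the 8-byte mandatory header
    header = struct.pack("!BBHI", flags, GTPU_MSG_GPDU, length, teid)
    return header + inner_ip_packet

if __name__ == "__main__":
    user_packet = b"\x45" + b"\x00" * 19   # stand-in for a 20-byte IPv4 packet
    tunneled = gtpu_encapsulate(teid=0x1234, inner_ip_packet=user_packet)
    print(tunneled[:8].hex())              # 30ff001400001234
```

The per-packet encapsulation is trivial in isolation; the argument Ericsson is making is about doing it, plus routing and tunnel management, for millions of packets per second, which is where purpose-built silicon beats a hosted software instance.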

The next-best approach would be to host the UPF features on an edge/core pool of servers with specialized chips to optimize packet movement. Intel’s x86 model is far from the only game in town even today; the article cites ARM, Marvell, Nvidia and Qualcomm as examples of other chips in use, and Broadcom offers its chips for white-box routers (DriveNets uses them) so they’d clearly be suitable. However, the use of a specialized resource pool could compromise the value of 5G as a means of driving early edge resource growth. Unless the edge applications needed the same special data-plane expediting or at least had another use for the special chips, the chip enhancements might make the edge resource pool too expensive for general use.

The solution to the general-edge-resource-pool problem is to use general-purpose x86 chips. As the article pointed out, Intel has taken the position that competition in the general-purpose computing chip space is high, and economies of scale in production are good. The former means that the performance of these chips is likely to improve, and the latter that chip costs will be as low as they’re likely to be under any option. If Open RAN penetration rates were modest over the decade, then we could assume that by the time there was a lot of interest in deploying hosted UPFs, the x86 (Intel and AMD) and ARM options would likely work for most UPF missions.

You can see the problem here. Since Open RAN hasn’t taken off as fast as many had expected (or hoped), it hasn’t advanced edge computing much at all, and if data-plane performance encourages implementations that require special silicon, it won’t promote a pool of general edge resources. Not only that, the requirement might well dilute the benefits of an open specification, since only a specialized implementation would be competitive with the Big Three vendors.

I’m a big supporter of open standards, and Open RAN, but I’m starting to wonder whether the market is outrunning the value proposition here. It may be necessary for the O-RAN Alliance to start looking at the specific question of suitable hardware for the UPF elements if we’re going to see Open RAN deliver what everyone hoped it would, which was an on-ramp to edge computing.