What’s Behind the “New Nokia”?

Nokia has launched a new strategy, including a new logo, and the latter has gotten more attention than the former. That’s too bad because Nokia’s numbers have been good, as opposed to those of some of its major competitors. Why would they want a “refreshed company strategy to deliver sustained long-term growth” at this point? Let’s dissect their comments and offer a few thoughts of our own besides.

CEO Lundmark said “Today we share our updated company and technology strategy with a focus on unleashing the exponential potential of networks – pioneering a future where networks meet cloud. To signal this ambition we are refreshing our brand to reflect who we are today – a B2B technology innovation leader. This is Nokia, but not as the world has seen us before.” This is obviously an example of not-uncommon CEO platitudes, but they also offer what they say is acceleration across six pillar principles. They are 1) grow market share through technology leadership, 2) expand their enterprise business, 3) manage product activity to ensure leadership where they elect to compete, 4) exploit opportunities outside the mobile services sector, 5) implement new business models like as-a-service, and 6) turn sustainability into a competitive advantage.

On the surface, many of these pillars sound like platitudes too, but I think it would be a major mistake for Nokia to tout a rebranding so obviously and then deliver a whimper instead of a bang. They surely know that, so they are in fact about to do something radical. Where among these six points might we find a hint of what it is?

The first and most obvious of the points is expanding their enterprise business. Rivals Cisco and Juniper have proved that the service and cloud provider space is under a lot of profit pressure, and thus can’t be relied upon to expand spending. If a buyer sector doesn’t spend, then sellers there can’t gain revenue and profits. Enterprise has been stronger for network vendors, and so it makes sense for Nokia to attack the space.

That “providers” are under considerable cost pressure is well-documented. EU operators have asked for subsidies from Big Tech, and Nokia rival Ericsson just commented that operators in the EU needed to consolidate if they wanted to improve their profitability. You can’t sell consumers network equipment, so if providers aren’t viable targets for revenue growth, then the enterprise is the only answer.

A focus on the enterprise means a refocusing of product strategies, which leads us to exploiting opportunities outside the mobile space. Media hype notwithstanding, there’s simply no significant opportunity for private wireless. It’s a pimple on Nokia’s bottom-line opportunity, but chasing it might be how Nokia realized that there are opportunities in enterprise networking that they would be able to address.

That will surely require implementing new business models. One specific technology point they make in their press release is that their strategy “details how networks will need to evolve to meet the demands of the metaverse era”, and it may be here that they signal why they believe a radical refocusing of their brand is essential. As always, we have to be careful of the “may be” qualifier.

Anyone who’s followed my view of the metaverse knows I believe it’s a general approach to modeling the real world through a combination of compute and network services, not a social-media-limited service. The thing that characterizes it is the need for a “digital twin” of some organized real-world system, to collect information to populate a model and then to use that model to make decisions about the system and exercise control over it. In short, it’s a shift of focus more toward the point of activity, something I’ve said is the next frontier of opportunity.
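
To make the “digital twin” idea concrete, here’s a minimal sketch of the loop described above: collect telemetry from a real-world system, keep a model of it current, and use the model to decide on and exercise control. The names and thresholds are hypothetical, chosen purely for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ConveyorTwin:
    """Hypothetical digital twin of a conveyor line: a model kept in sync
    with telemetry and used to decide on, and exercise, control."""
    belt_speed_mps: float = 0.0
    motor_temp_c: float = 20.0
    jam_detected: bool = False
    history: list = field(default_factory=list)

    def ingest(self, telemetry: dict) -> None:
        # Populate and refresh the model from point-of-activity sensor data.
        self.belt_speed_mps = telemetry.get("belt_speed_mps", self.belt_speed_mps)
        self.motor_temp_c = telemetry.get("motor_temp_c", self.motor_temp_c)
        self.jam_detected = telemetry.get("jam_detected", self.jam_detected)
        self.history.append(telemetry)

    def decide(self) -> str:
        # Use the model to make a decision about the real-world system.
        if self.jam_detected:
            return "stop"
        if self.motor_temp_c > 80.0:   # illustrative threshold
            return "slow"
        return "run"

def control_loop(twin: ConveyorTwin, telemetry: dict, actuator) -> None:
    # Exercise control over the real system based on the twin's decision.
    twin.ingest(telemetry)
    actuator.send(twin.decide())
```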

The obvious question is how this technology shift squares with the top point of expanding Nokia’s enterprise business. I think the answer to that is clear, and related in a sense to the whole private-5G craze that Nokia and others have been pushing. Real-time is edge-focused, and right now the “edge” is all the way out on the customer premises. Unlike 5G, which is really a public service that can perhaps be exploited in limited enterprise verticals, real-time edge is almost universal and almost exclusively premises-focused in the near term.

Nokia isn’t an incumbent in computing or networking for enterprises, which leaves them free to perhaps adopt a more radical approach. Cisco and Juniper both address edge computing in both a product and positioning way, but neither is big on digital twinning or metaverse. If the next big thing in computing is real-time, digital-twin, metaverse, then Nokia might be able to stake out a claim for the space before any of the current incumbents are willing to be that radical. That could give them an immediate point of leverage with enterprises.

Any attempt by Nokia to jump into the enterprise space would pressure companies already established there, especially those like Cisco and Juniper who are switch/router vendors. Nokia’s fastest means of transitioning to a greater enterprise focus would be in product areas where the service providers were already consumers. Security would be more difficult for Nokia to address, unless they decided on M&A, and that could violate their principle of being a technology leader on their own.

I think the most interesting near-term question is what Nokia might decide to do in virtual networking. Their Nuage offering is one of the most mature virtual networking product sets, and one of the best. They expanded it to SD-WAN, and they offer a managed SD-WAN service, which plays to their goal of an as-a-service shift. Again, I believe virtual networks and NaaS are a frontier of networking. Might Nokia be more aggressive there too, and steal some opportunity from enterprise incumbents who have been shy about pushing the topic with enterprises?

There is a lot of potential here, and perhaps the best part is that the key issue of edge computing and digital twinning is one Nokia could also leverage in the service and cloud provider spaces. The enterprise is a good incubator for this opportunity, but its maturation would certainly transform both service provider and cloud. In other words, this could work.

It might also be a bad sign, of course. 5G isn’t what vendors hoped it would be, and even what it is has started to wind down. There are no major budgeted service provider initiatives on the horizon for at least four years. The cloud’s growth is slowing. It could be that Nokia sees the handwriting on the wall with respect to the provider side, and it could be that the conservative, glacial-paced, mindset of the service provider space has permeated Nokia management. It could be that they won’t be doing any of the aggressive things I’ve noted here, in which case they may be the leading edge of a major problem for vendors that depend on service provider opportunities. You can’t just say “enterprise” and win there, and until we know whether Nokia has anything tangible in mind, we won’t know whether they can succeed with their transformation. A logo isn’t enough, gang.

Why Operators Need Federation (and Why it May Be Too Late)

Mobile services rule, that’s a fact of life for the operators. For decades, they’ve been more profitable than wireline services, and increasingly they’re being used to make customer relationships sticky, pulling through wireline broadband in a competitive market. Comcast, for example, has long had an MVNO relationship with Verizon and uses its mobile offerings to buoy up its cable broadband. Recently, they’ve started to deploy their own 5G technology in high-traffic areas to reduce their MVNO costs, but that doesn’t address what’s perhaps the major mobile challenge. It’s the first service that operators have to compete in that requires major out-of-region presence. It won’t be the last.

One of the biggest challenges of “advanced” or “higher-layer” services is that few candidate services can be profitable with a limited service footprint. Business services, the segment most credible higher-layer services would address, are almost necessarily at least national in scope, often continental, and sometimes truly global. What do operators do to address this? There are several options, and all of them pose challenges.

The simplest approach is the “segment-and-extend” model. All higher-layer services are “higher” in that they’re overlaid on connectivity. Operators typically have agreements to extend connectivity out of their home areas. For mobile services, they could use tower-sharing or MVNO relationships, for example, and for wireline they could simply have a resale agreement with other operators to provide connectivity, or even ride on the Internet. This connectivity could then extend the reach of the over-the-top elements, essentially backhauling the services’ higher-layer requests to one of their own hosting points.

The problem with this approach is that service quality may be difficult to ensure, and variable across the market geography. Not all operators have resale agreements with others, not all areas can be covered by these agreements, and the agreements are often for a limited term, to be renegotiated regularly. If the “data dialtone” of the Internet is used, there are obviously issues with QoS and security that would have to be addressed, though these could perhaps be mitigated at least in part by linking the higher-layer services to an SD-WAN.

A second possibility, one that can be seen as an expansion of our first, is federation, meaning the creation of a pan-provider mechanism for sharing service features in a specific wholesale/retail framework. This concept was actually attempted about 15 years ago with the IPsphere Forum (IPSF), and gathered significant international support at the time, but gradually lost focus through a combination of vendor wrangling and competing standards initiatives. A federation approach, like IPSF, would have to create both a mechanism for the exchange of service components and a mechanism for composing services from components. I participated extensively in the IPSF and I can testify that this is no easy task, but it is possible.

The Nephio initiative launched in 2022 might be a path toward a federation strategy based on Kubernetes, but so far it’s focused largely on composing services through software orchestration and not on standardizing a mechanism for creating and exchanging components among operators. I think it’s possible Nephio could be augmented or extended, but that process isn’t underway at the moment and I can’t say when, or if, it will be.
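
As a thought experiment only, here’s a minimal sketch of the two mechanisms a federation would need: a catalog through which operators expose wholesale service components, and a composer that assembles a retail service from components covering the required footprint. Every name and field here is invented for illustration; this isn’t IPSF or Nephio.

```python
from dataclasses import dataclass

@dataclass
class ServiceComponent:
    operator: str          # federation member offering the component
    feature: str           # e.g. "connectivity", "edge-hosting"
    footprint: frozenset   # regions covered (illustrative codes)
    wholesale_price: float

CATALOG = [
    ServiceComponent("OperatorA", "connectivity", frozenset({"US-East", "US-West"}), 100.0),
    ServiceComponent("OperatorB", "connectivity", frozenset({"EU-West"}), 120.0),
    ServiceComponent("OperatorB", "edge-hosting", frozenset({"EU-West"}), 80.0),
]

def compose(feature: str, required_regions: set) -> list:
    """Assemble a retail service: for each required region, pick the cheapest
    published component from whatever federation members offer."""
    service = []
    for region in sorted(required_regions):
        candidates = [c for c in CATALOG
                      if c.feature == feature and region in c.footprint]
        if not candidates:
            raise ValueError(f"no federation partner covers {region}")
        service.append(min(candidates, key=lambda c: c.wholesale_price))
    return service

# A pan-regional retail offer composed from two operators' wholesale parts:
print(compose("connectivity", {"US-East", "EU-West"}))
```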

The biggest problem with federation is that it would require some form of formal coordination and cooperation, something like a standards body. In the telecom space, these bodies are common but their operational pace is usually glacial, which means that it would be difficult for a federation approach to be formalized in time to respond to market conditions. As a means of adding higher-layer services, given that there are no initiatives underway to do what’s necessary, I suspect federation would take too long.

The third option is the “facilitating services” option, espoused by AT&T for one. The idea here is to offer OTTs a wholesale service set that would allow them to build higher-layer services at a lower cost. The EU JV on advertising/identity services is an example of this. I like the facilitating services idea, but it’s taking a different slant on the problem, one that cedes higher-layer service primacy to the OTTs and thus limits operator profits and customer ownership.

The big advantage of facilitating services lies in this limitation, because operators in general are awful at being OTTs. Providing truly relevant facilitating services would let them dodge this issue and all the issues associated with the other two options. Each operator would be deploying facilitating services so there’s no need to coordinate, right? Well, maybe.

The problem with facilitating services overall is that OTTs have the same, or greater, need to deploy their services across a broad footprint. How do they do that if all the operators within that footprint don’t offer facilitating services, or choose to offer very different selections of services? Facilitating services are great where there’s a natural geographic boundary to what they’re intended to facilitate.

You can see where we’re heading here. For operators to be able to offer credible higher-level services across a broad geographic footprint, they’d need to work some bilateral deals with other operators for connectivity, or they’d need to ride their services on or over the Internet. That would make the operator an OTT, and if they want to avoid that (which most surely do) then they have to look at facilitating services, with an eye to creating a set that’s intended to facilitate things within their own footprint, meaning without suffering from a lack of support from other operators.

Doomed, then? Well, there may be one more option. There are industry groups that are primarily aimed at OTTs, and even open-source projects. If an operator were to build a service strategy based on one or more of these, they might then enjoy a shot at a broader footprint by promoting the approach to other operators who, after all, have the same issues.

I offer as an example the Open Metaverse Foundation. I don’t think that this initiative is looking at the metaverse in as broad a way as they need to, but something like this could create a kind of “federation by proxy” or “commutative federation”. Things equal to the same thing are equal to each other, as they say in geometry or algebra, so if two operators base their strategy for either OTT or facilitating services on an open body like OMF, then those operators should be able to federate their services across their combined footprint.

Operators need to recognize that they let the federation issue lie fallow for too long, and so they need to figure out how to play catch-up now. The only viable approach appears to be jumping on the bandwagon of some open-source or industry group that’s filled with representatives who know how to make progress, and make it quickly. Otherwise, developments in the service and infrastructure markets are going to make it much harder for them to escape commoditization in the future.

Telco Capex, Infrastructure Technology Trends, and Vendor Opportunity

Like most analysts, I don’t often cite or praise the work of other analysts, but today I want to give a shout-out to Omdia for a figure they posted on LinkedIn, titled “Global telecoms capex flow, 2021”. It makes, in convenient graphic form, some of the points I’ve been raising about the evolution of telecommunications infrastructure and services, and it should serve as a warning to vendors not to take their eye off the ball.

Everyone knows that networks are aggregation hierarchies aimed at securing optimum economy of scale. At the edge or access end, the distribution of network users in the real world means that network infrastructure has to reach out to get traffic onboard. From there, traffic is concentrated inward to take advantage of optical capacity benefits. Access, aggregation, and core.

The Omdia chart shows the segments of operator capex, and while it doesn’t quantify the spending, it does represent the categories in proportion to their contribution. The fattest piece is the access network, which shouldn’t surprise anyone. The thinnest piece, smaller even than the devices-and-CPE piece, is the core, but that doesn’t mean that the core isn’t important, or isn’t a viable market target, particularly for new entrants.

The problem with the access network is that there’s so darn much of it that operators have to squeeze every cent of cost out of it or they risk being totally unprofitable. That means that capex is always under pressure there, and so is opex, because much of what I’ve always called “process opex” relating to actual network operations is related to customer care. “My network is broken if I say it is,” is a reasonable user mantra, and so it’s critical that everything humanly possible is done in the operations automation area to reduce the burden of finding problems and fixing them.

All of this tends to make access networking a fortress of incumbency. That’s particularly true in the wireless area, because even “open” initiatives like O-RAN don’t immediately convince the operators to adopt best-of-breed purchasing and do the necessary integration. In any case, it’s always difficult to introduce something new into a vast sea of old-ness without creating all manner of operations issues. That’s why 5G was an important opportunity for up-and-comers (one largely missed), and why the deployment of 5G New Radio over LTE EPC worked in favor of incumbents by accelerating early deployment, ahead of open-model specifications.

The beauty of the core network is that it does represent a pretty small piece of capex and opex, which means that if there’s a need to modernize and refresh core technology, it may be practical to do that by simply replacing the core network overall. There are millions of access devices in a network, but hundreds (or fewer) of core devices.

But even true beauty often has a blemish or zit here and there. With the core network, that blemish is the small contribution it makes to capex. If operators don’t spend much in a given space, then vendors in that space don’t make much. A success in the core here and there is going to quickly exhaust the total addressable market. A good play in the network core is a play that knows the core is a stepping-stone to something that has more capex associated with it. But, given operator reluctance to increase capex, what could that something be?

Cloud infrastructure, meaning server hosting resources, currently accounts for almost three times the capex of the network core. IT platform tools and software account for about double the capex of cloud infrastructure. And best of all, the access network that’s by far the biggest contributor to capex has one essential requirement, and that is to connect to all of this stuff…cloud, software, and core. That’s why I love the “metro” opportunity.

Traffic in the access network naturally moves into the core via some “on-ramp” technology. Every user can’t be efficiently connected to the core directly, so that on-ramp is the focus of aggregation within the access network, the collecting of traffic within a geographic area. Thus, this on-ramp point is linked to a geography and serves a concentration of traffic and user connections. That makes it a great place to host things that are linked to geographies, which would include content delivery and IoT.

It’s also a great place to achieve economies of scale in hosting and storage. Go out further and you multiply the number of sites you’d need by several orders of magnitude, which means there would be no economy of scale and little chance of operational efficiency. Go deeper and there’s too much traffic to allow you to recover individual user characteristics and serve user needs, or to support real-time application hosting.
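
A back-of-the-envelope illustration of why this matters, with purely hypothetical numbers (none of them from the Omdia figure): assume a fixed national workload, a fixed per-site operations and spare-capacity overhead, and sites sized to their local peaks. Pushing hosting from roughly 250 metro points out to tens of thousands of edge offices multiplies the overhead and strands capacity.

```python
# Illustrative numbers only; none of these come from the Omdia figure.
total_peak_servers = 50_000        # servers needed nationally at peak
per_site_overhead = 20             # spares, management, minimum footprint per site
utilization_penalty = {250: 1.0, 25_000: 1.4}   # small local pools peak worse

for sites in (250, 25_000):
    workload = total_peak_servers * utilization_penalty[sites]
    overhead = sites * per_site_overhead
    print(sites, "sites ->", int(workload + overhead), "servers deployed")

# 250 sites -> 55000 servers deployed
# 25000 sites -> 570000 servers deployed
```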

Where these on-ramp points are is of course a reasonable question. I think it’s one that was answered decades ago in the heyday of telephony. We had “edge offices” supported by “Class 5” switches. These offices were linked to “tandem” offices supported by “Class 4” switches, and those were located in what came to be called “Local Access and Transport Areas” or LATAs. We had about 250 such areas in the US, and that roughly corresponds to the number of metropolitan areas. Thus, a “metro” is, historically, the right place to jump on to a core network and to host incremental service features.

OK, topologically and geographically that all makes sense, but what about technologically and (most important) financially? There are three possible ways that “metro” could be supported. First, you could consider metro to be the inside edge of the access network. Second, you could consider it to be the outside edge of the core network. Finally, you could consider it an entirely new layer. Which option is best depends on perspective.

If I were a Nokia or Ericsson, I’d want to promote the metro to be a piece of the access network, because I’d be an access-network incumbent. Favoring this view is the fact that 5G specifications call for feature hosting, which means that hosting and “carrier cloud” are arguably requirements for 5G access networks (that’s a piece of the Omdia figure that’s almost three times the size of “core”, by the way).

If I were DriveNets, I’d want metro to be the edge of the core, because I’d be pushing core router sales. The DriveNets cluster router model fits well in the metro space too, in part because you could use it to connect server pools locally as well as support aggregation outward and core connectivity inward.

If I were Juniper, I’d want metro to be a separate, new, space. Juniper has actually articulated this to a degree with its “Cloud Metro” announcement a couple years ago. This positioning would let Juniper ease its way into access capex via 5G (which they’ve been promoting anyway), and also support general carrier-cloud missions.

Each of these three strategies has a vendor most likely to benefit from adopting it. Which strategy, and which vendor, will win is the question that might determine the future of network infrastructure and the future of the vendors involved. It’s going to be fun to watch.

An Attempt to Assess Section 230

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This text, part of Section 230 of 47 US Code, is often called “the 26 words that created the Internet”. It’s this specific section that the US Supreme Court is being asked to examine in multiple cases. There are two questions arising from that. First, what will SCOTUS decide? Second, what should it decide? We can’t address the first, so we’ll have to focus on the second.

The Google case that’s already been argued is a narrow example of Section 230. The assertion here isn’t that Google is responsible for YouTube content, but that it’s responsible if it decides, by any means, to promote specific content that turns out to be outside traditional constitutional protections. That raises what I think is the key point in all of this, which is that this shouldn’t be a question of “everything is protected” or “nothing is protected” under Section 230.

CNN’s view attempts balance, and at least lays out the issues. It also identifies a basic truth that reveals a basic untruth about the opening quote. These 26 words didn’t create the Internet, they created social media. Finally, it frames in an indirect way the question of whether social media is simply an extension of a real-world community or something different. That leads us into the slippery world of the First Amendment.

Freedom of Speech, which is what the First Amendment covers, doesn’t mean that anyone can say anything they want. The well-known limitation regarding yelling fire in a crowded theater is proof that the freedom doesn’t extend to areas where public safety is involved. Most also know that if you say or write something that is both untrue and harmful, it’s a form of defamation, and you might be sued for it. That means that exercising your freedom of speech and uttering a falsehood can’t interfere with someone else’s reputation or livelihood. There are also legal protections against speech that’s deemed “hate speech.” Free speech has limits, and those limits can be enforced.

Except, maybe, online, and that’s where the issue of whether social media is an extension of the real world comes in.

If Person A says something that’s criminally or civilly actionable, but yells it out in a vast wilderness, it’s unlikely they’d be held accountable even if someone overheard it. Similarly, saying the same thing in a small gathering wouldn’t likely be prosecuted unless they were uttering an invitation to join a criminal conspiracy or the “gathering” was one open to a wide range of people and ideas. Suppose you uttered a defamation to a reporter? Suppose you characterized an ethnicity or gender in a negative way in a group of people you didn’t know? It seems like many of the exceptions to free speech are exceptions that relate to the social context, and that’s why it’s important to decide what social media is.

You can create a social-media audience in a lot of ways, from a closed group where people are invite-only and where the topic is specifically identified ahead of time to a completely open audience like that theater someone could be charged for yelling “Fire” in. It’s not clear whether everyone who used social media would understand the scope and context into which their comments were introduced. That alone makes it difficult to say whether a given utterance should be considered “free speech.”

Then there’s anonymity. Do you know who is posting something, or do you just know who they say they are? Some platforms will allow you to use a “screen name” that doesn’t even purport to identify you, and I don’t think any popular platform actually requires solid proof of identity. Redress against the person who uttered something isn’t possible if you don’t know who they are.

Finally, there’s “propagation velocity”. Generally, people are more likely to get a serious penalty for libel than for slander, because the former means the offending remark was published and the latter that it was spoken. Spoken stuff is gone quickly; published stuff endures as long as a copy exists. If there’s harm, it endures too.

Opponents of Section 230 believe that immunizing social-media companies from actions regarding what they publish, but don’t create themselves, has made the platforms a safe harbor for abuse of free speech. Supporters of the section believe that a social media forum is simply a virtual form of the crowd on the street corner, which orators have addressed from soap boxes since the dawn of our Constitution.

What’s right here? Let’s start by looking at what, IMHO, is clearly wrong. It would be wrong to say that a social media platform is responsible for everything that every person on it says. To me, that clearly steps across the boundary between Internet forums and the real world and applies a different set of rules to the former.

I also think it’s wrong to say that social media is responsible for policing the sharing of posts within a closed community that people join if they accept the community value set. To me, that steps across the line between such a community and a party where people discuss things among themselves. Same rules should apply to both.

What is right, then? I think that if somebody wants to share a post, that post has to be subject to special moderation if it is shared outside those closed communities. You can’t yell “Fire!” in a crowded theater, nor should you be able to in a crowded Facebook. Meta should require that any broadly shared post be subject to explicit screening.

It’s also right to require the same thing of posts that earn a social media recommendation. If a social-media player features a post, they’re committing some of their credibility to boost the post’s credibility, and they have to take ownership of that decision and accept the consequences of it. This is where Google’s search case comes into play IMHO. Prioritizing search results via an algorithm is an active decision that promotes the visibility of content, and I think that decision has consequences.

I also think it’s right to place special screening requirements on any posts from sources that have not been authenticated as representing who they claim to be. That identity should be available to law enforcement, or if required in discovery in a civil defamation lawsuit. Social media may not be responsible if a user defames someone, but they should not offer users a level of anonymity that’s not available in the real world.

Is there any chance the Supreme Court is going to do something like this? Many of the justices are of my own generation, so it’s unfair (I think) to assume they’re all Luddites. However, there’s no question that my own views are colored by my own technical bias and social experience, and there’s no question that in the end what’s going to matter here is what the law says, which I can’t judge as well as they can. Might the law not be up-to-date in an Internet world? Sure, but many people and organizations probably think that the law should be updated to represent their own views better. There’s no law at all if everyone gets to write their own, and if the law is at fault here, we need to address changing it formally, not claim it doesn’t apply.

Looking at the Buyer Side of NaaS

One of the tech topics that seems hardest to track is “network-as-a-service” or NaaS. Like a lot of technologies, NaaS is subject to what we could kindly call “opportunistic redefinition”, meaning NaaS-washing. When that happens, definitions tend to get fuzzy because vendors broaden the definitions to ride the media wave. I wondered whether we might address this problem by starting from the other end, the demand or buyer side, so I’ve culled through six months of enterprise data on NaaS, and here’s what I found.

If we had to pick a NaaS definition or service model from the stories and vendor offerings, we’d likely pick “usage pricing” as that model. That’s not an unreasonable definition either, given that in cloud computing, the “as-a-service” model is often based on usage pricing. The first question we’d have to ask about NaaS is therefore whether a usage-priced service model is actually appealing to the market, meaning whether it supports a symbiotic buyer/seller paradigm.

Enterprises listed “cost savings” as their absolute number one benefit objective for NaaS. Nothing else even comes close, in fact. Of 112 enterprises in my analysis, every one put it at the top of the list. The presumption is that by adopting network-as-a-service, an enterprise could lower costs by reducing average capacity needs, since NaaS would adapt to peak periods. Very logical.
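
The buyer’s logic is easy to illustrate with made-up numbers (no real tariffs are implied). If a usage-priced NaaS bills on actual traffic rather than on capacity provisioned for the peak, the saving depends entirely on how peaky demand is relative to the usage-price premium, and that tension is exactly what sets up the buyer/seller mismatch below.

```python
# All figures are invented for illustration; no real tariffs are implied.
hourly_demand_gbps = [2, 2, 3, 5, 9, 10, 9, 6, 4, 3, 2, 2]   # a 12-hour profile

peak = max(hourly_demand_gbps)            # a fixed service must cover this
average = sum(hourly_demand_gbps) / len(hourly_demand_gbps)

flat_rate_per_gbps_hour = 1.00            # provisioned-capacity pricing
usage_rate_per_gbps_hour = 1.60           # assumed NaaS premium for elasticity

fixed_cost = peak * flat_rate_per_gbps_hour * len(hourly_demand_gbps)
naas_cost = sum(hourly_demand_gbps) * usage_rate_per_gbps_hour

print(f"peak {peak} Gbps, average {average:.2f} Gbps")
print(f"fixed-capacity cost: {fixed_cost:.2f}, usage-priced NaaS cost: {naas_cost:.2f}")
# NaaS saves money here only because peak/average (about 2.1x) exceeds the 1.6x premium.
```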

The challenge is that if you ask service providers, 18 of the 19 who admitted they were “considering” a NaaS offering said that their primary goal was to generate new service revenue. Let’s parse that combo. Here’s the buyer, who wants NaaS because it lowers his cost, and here’s the seller who offers NaaS so the buyer will spend more. If a chatbot offered that dazzling insight, we’d say it made a mistake.

The fact is that there are very few service features that a buyer would spend more on. Security, cited as the number two benefit of NaaS by 97 of the 112 enterprises, was the only example enterprises offered as a justification for a higher cost for NaaS capability, and only if NaaS security let them reduce other security spending by more than the NaaS service would increase their service spending. This shouldn’t be a surprise; enterprises have a fixed business case for network services, and so there’s no incentive to increase spending if you can’t increase the business case. There are few “new” network projects these days, so there’s little chance of a major new benefit coming along to justify a new cost.

What about security, then? The “benefit” enterprises think NaaS could bring is the benefit of traffic, user, and application compartmentalization, meaning a form of connection control. While this is the most-cited security benefit, it’s cited by only 43 of the enterprises. The majority don’t have a specific NaaS feature they think would enhance security, and that they believe could then result in lower security spending elsewhere. For that group, I think a NaaS security benefit could be realized only if the service provider could establish a credible NaaS feature connection. For the group of enterprises who think NaaS connection control could enhance security, there are still issues that would have to be addressed.

First and foremost, connection control benefits from NaaS derive mostly from virtual networking as the likely NaaS foundation. I think that virtual networking is indeed likely to be the foundation of a credible NaaS, but you can do virtual networking without usage pricing, which most still consider the defining feature of NaaS. We could capture the security argument for NaaS simply by using SD-WAN and/or broader virtual-network features.

Not all SD-WAN implementations offer connection control, because not all are aware of user-to-application sessions and capable of deciding what’s allowed and what isn’t through policy management. Of over 40 offerings in SD-WAN, only about five have any features in that area. Of my 112 enterprises, only 32 could identify even one of those, which means that the connection control features of SD-WAN aren’t widely recognized. Going to the broader virtual-network space, 88 enterprises could identify an actual provider of virtual networks (VMware’s NSX is the most-recognized) but only 15 could identify any other option, and only 9 said they used virtual networks for connection control and segmentation outside the data center.
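
For readers unfamiliar with what “session-aware” connection control means in practice, here’s a minimal sketch: the forwarding decision is made per user-to-application session against a policy table, with a default of deny. The roles, applications, and policy entries are hypothetical, and this isn’t modeled on any specific SD-WAN product.

```python
# (role, application) pairs that are explicitly permitted; the default is deny.
ALLOWED_SESSIONS = {
    ("engineering", "git"),
    ("engineering", "ci-dashboard"),
    ("finance", "erp"),
    ("any-employee", "email"),
}

def admit_session(user_role: str, application: str) -> bool:
    """Session-level connection control: a new user-to-application session
    is forwarded only if policy explicitly allows it."""
    return ((user_role, application) in ALLOWED_SESSIONS
            or ("any-employee", application) in ALLOWED_SESSIONS)

assert admit_session("finance", "erp")
assert admit_session("finance", "email")      # covered by the any-employee rule
assert not admit_session("finance", "git")    # not whitelisted, so the session is refused
```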

A general virtual-network technology, which is available from VMware but also from network vendors like Cisco, Juniper, and Nokia, is capable of what’s needed for security-driven connection control, but hardly anyone knows it. SD-WAN is generally not capable of doing that, but few enterprises know of the specific implementations that can offer it. One thing this argues for, IMHO, is a unification of the virtual-network space, a combining of SD-WAN and broader (often data-center-multi-tenant-centric) virtual network models.

My enterprise contacts weren’t spontaneously committed to a model like that. The enterprises who saw connection control and virtual networks as linked (9 in the true virtual-network group and another 8 in the SD-WAN group, for a total of 17) saw virtual networks creating closed application or user-role communities to which both workers and applications/databases could be assigned. That’s at least close to the model Juniper supports with the SD-WAN stuff it acquired with 128 Technology.

That would then raise the question of whether such a super-virtual-network model is the real service opportunity, and thus whether “NaaS” has any incremental value. I think it does, or could, if we take a different slant on what “as-a-service” means. Rather than linking it with usage pricing, look at it as connectivity as a service. That would mean that the connection model would be more dynamic than would typically be the case with a virtual-network strategy alone.

Dynamic connection management could mean a lot of things, and be implemented in a lot of ways. You could envision “communities” to which both users and applications could bind, but rather than the binding being largely static, it might be controlled by policies set at the community level, and might even allow users with certain credentials to join and leave communities at will. You could envision policies that would even look at the membership, and expel people or applications if certain users/applications joined. You could even envision a community policy to disconnect everyone, in situations like a security problem.
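
Here’s a rough sketch of that “connectivity as a service” community model, just to show how thin the control layer could be: communities with join policies, credential-based membership, and a panic switch. It’s illustrative only, not a description of any vendor’s NaaS.

```python
class Community:
    """Hypothetical NaaS connection community: users and applications bind to it
    dynamically, under policies set at the community level."""
    def __init__(self, name, required_credential):
        self.name = name
        self.required_credential = required_credential
        self.members = set()

    def join(self, member, credentials):
        # Policy check at join time; members with the right credential self-serve.
        if self.required_credential in credentials:
            self.members.add(member)
            return True
        return False

    def expel(self, predicate):
        # Community-level policy can remove members that no longer qualify.
        self.members = {m for m in self.members if not predicate(m)}

    def disconnect_all(self):
        # The "security incident" case: tear down every connection at once.
        self.members.clear()

payroll = Community("payroll", required_credential="finance-clearance")
payroll.join("user:alice", {"finance-clearance"})
payroll.join("app:payroll-db", {"finance-clearance"})
payroll.join("user:guest", {"visitor"})              # rejected by the join policy
payroll.expel(lambda m: m.startswith("user:alice"))  # e.g. a revoked credential
payroll.disconnect_all()                             # e.g. on a detected breach
```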

The interesting thing is that while users see usage pricing of services as valuable only if it saves them money, all of my enterprises thought that this NaaS model would be useful in security and that they’d pay for it if it at least contained overall security costs. More than half would accept a higher total cost, security included, if security were “significantly” enhanced. Thus, dodging the usage-pricing issue might actually give network operators a path to revenue growth out of NaaS. But this would mean some serious work marketing features, something operators are notoriously bad at. Perhaps it’s time for them to learn to do it better.

Cisco Comes out of “Follower” into “Fast”

Let’s start by looking at what we’d like to learn from the Cisco earnings call held Wednesday. Yes, it’s nice to know how Cisco did, particularly relative to competitors like Juniper. Yes, it’s nice to know how they characterize their quarter and what guidance they offer. What’s nicer is relating the Cisco information to the conditions in the network market, and to do that we have to factor in the Two Great Truths about Cisco as a network competitor. The first is that Cisco is a sales monster, a company that knows how to exploit account control. The second is that Cisco is an admitted “fast follower” in terms of tech innovation, not a leader. Those are the things we’ll come back to in our analysis of their results.

OK, now to the details. Cisco exceeded guidance and expectations in both earnings and revenues, and issued better-than-expected guidance. Revenue was up 7%, which is impressive in what’s surely still a difficult macro environment. Cash flow was at a record high, and recurring revenues accounted for 44% of total revenues, which shows Cisco is managing the transition to subscriptions well. Software subscription revenue was up 15% in fact. All the market segments did well except the service provider space, which was off (Cisco says) because the providers were taking time to absorb the pent-up deliveries generated because of easing supply chains.

What was interesting about Cisco’s earnings call was that they were more “futuristic” than usual. Cisco’s calls have always been replete with stories about “execution”, meaning that they were really about sales effectiveness more than product suitability. On their most recent call, they talked about “web-scale cloud infrastructure”, “hybrid cloud”, and “IoT”. Combine that with their cited analyst forecasts that IT spending will increase mid-to-high single digits in 2023 and you have the foundation for aggressive guidance, which is what Cisco offered.

They also made some specific comments in the area of management, which they were careful to characterize as “cloud management” to make it inclusive of hosting platforms. They cited cloud-native full-stack visibility via ThousandEyes and AppDynamics, and they promised to bring AI/ML into the management story at a faster pace.

What I’m seeing here is those Two Great Truths playing in a different way. On the one hand, there is no question that Cisco’s sales prowess showed in the quarter. Over the last three months, enterprises have been telling me that a vendor who has account control and, in particular, intense sales presence in their companies has a better chance of getting more of the assigned budget and a better chance of getting the budget increases that would benefit them. Cisco played those cards very well, and that’s the big reason for their success in the quarter.

The second, “fast-follower” truth is also playing out. Juniper, arguably Cisco’s arch-rival in network equipment, has been consistently better than Cisco at innovating. Their management strategy is better because of Mist. Their SD-WAN and virtual networking strategy is better because of 128 Technology. They had a decent quarter largely because of these technology innovations. Being a fast follower means not pushing a technology innovation until a competitor proves it’s really beneficial, then making an aggressive run at the space to own it. That’s what I think Cisco’s call is signaling. It’s time to emphasize the “fast” piece of “fast follower” versus the “follower” piece. They’re coming after Juniper’s differentiators.

This is a smart play for Cisco, not only because it’s consistent with our second great truth, but because it plays off the first. If your customer is looking at a competitor because they’re offering something innovative, you can step in at the sales level if you have account control. Nobody in the space, Juniper included, can play account control like Cisco can, and that means that competitors like Juniper have to rely on something else to level the playing field. That something else is marketing.

Marketing is the invisible, ubiquitous, powerful, virtual salesperson. It can develop demand, it can frame the competitive landscape that will define the features that are important to the buyer, and it can grease the skids of financial approvals. In short, it can do much of what sales account control can do, and it can do a few things in the early part of the opportunity-creating process that sales can’t address. Highly effective marketing can counter Cisco’s sales effectiveness, particularly if you combine it with technology innovation.

Juniper’s challenge, which I’ve blogged about for several years, is that they’ve underplayed their product assets. In their last earnings call, they talked more effectively about their technology innovations than they had in prior calls, and frankly more effectively than they talk about them on their own website. That’s left their own sales initiatives to test the waters on issues like cloud and AI and SD-WAN, without seizing ownership of the key features of the things they’ve innovated with.

You cannot out-sell Cisco, period. If you emphasize sales as your means of defending the issues that drive purchasing, then you surrender the field to the acknowledged sales elephant, which is Cisco. You can defeat that elephant only through incredibly aggressive marketing. It’s no accident that Cisco’s earnings call seems almost a counterpoint to Juniper’s call, a positioning of Cisco as the real power behind the innovative spaces that Juniper has exposed to the market. They are coming, as I’ve said, and coming hard.

The answer to that? The only one is that incredibly aggressive marketing. Cisco is signaling that it’s going to follow, perhaps only implicitly, its historical approach of the “five-phase strategy”. A competitor comes up with something good. Cisco announces a five-phase plan that makes that something into a piece of the grand and glorious whole, a super-concept that Cisco at the time of their announcement is already in phase two of. The only defense against that approach is to define the features of the space, create the columns of the product comparisons, before Cisco can do that.

Cisco’s call is a clear sign that it, and the market overall, are entering a new phase in networking. Cisco is saying that they recognize that the feature drivers of networking are changing and that they now have to demonstrate their competence, nay their leadership, in those new spaces. If they succeed, they’ll take a leadership position in that new-model networking and their sales account control will keep them in that position. If they don’t, if Juniper or someone else defines all those feature points, then Cisco’s upcoming quarterly results may be harder to sing about.

The Hype and Hope of Open RAN

Is “Open RAN” in 5G something we should welcome, or be afraid of? Is it all a part of the 5G hype, is it its own hype category, or is it not hype at all? What’s its relevance to telecom infrastructure overall? All of these are important questions, so let’s try to answer them.

“Open RAN” is generally applied to initiatives aimed at creating open-model 5G RAN technology, but you may hear more about the O-RAN Alliance, the dominant play in the space. The goal of the initiative is to create the broadest possible open specification for RAN elements so that a complete implementation can be defined. 3GPP RAN specs leave some components of the RAN (technically, 5G New Radio) opaque and Open RAN defines specifications and decomposition models for these. This permits best-of-breed element selection, and also broadens potential market entrants.

The latter is arguably the primary goal of Open RAN. Without it, the major mobile infrastructure vendors (Ericsson, Huawei, and Nokia) would likely lock up deals. That’s because those opaque components of the 3GPP spec would be implemented in different ways by smaller vendors and that would risk lock-in to a small and unfamiliar player. Needless to say, the major mobile infrastructure vendors are of two minds about this. On one hand, buyers fearing vendor lock-in could have their fears mitigated by Open RAN conformance. That could accelerate adoption. On the other hand, admitting others into the market isn’t exactly a smart competitive move for the three giants. A recent Light Reading story quoting Ericsson suggests that by the end of the decade, Open RAN could account for a fifth of the RAN market sales.

This answers our first question. If you’re one of the Big Three of RAN, Open RAN is a mixed blessing, but I think most would privately agree that they’re negative on the concept, forced to support it by competitive pressure. If you’re anyone else in either the operator or mobile infrastructure world, it’s a blessing. But what about its hype status?

5G RAN is already widely deploying, but if Ericsson is correct regarding the impact, then Open RAN will have a minimal impact on 5G in the near term. And is Ericsson correct? My own model suggests that the peak penetration of Open RAN depends on what you mean by it. Both Nokia and Ericsson have committed to Open RAN or convergence of their products with the spec, so if that’s counted as an Open RAN deployment, then I think we’ll see Open RAN hit 20% penetration some time in 2025.

But remember the idea was to create a truly best-of-breed model for RAN. If we assume that Open RAN penetration means the number of RAN implementations that actually take advantage of the Open RAN model to support multiple vendors, then I think it’s doubtful that we’ll ever hit that 20%. My model isn’t accurate as far out as 2030, but it seems to be plateauing at about 17% and wouldn’t even approach that until 2028.

But even if Open RAN has significant penetration, can we say it’s not hyped? As it happens, the issues with Open RAN hype may well be connected with the issue of 5G hype. It’s not that 5G is being exaggerated in deployment terms; it already dominates all the major markets. The problem is that 5G is usually characterized as a major new source of operator revenues, and since I doubt that it will be, the claim qualifies as hype in my book. I’ve blogged plenty on that, so feel free to look back if you want my reasons. So what would, could, should the role of Open RAN be in those new service revenue opportunities? That’s what decides whether Open RAN is hyped.

I’ve rejected the notion that just having 5G Core with network slicing was going to have significant impact on the mobile market. If nothing else can have such an impact, then you could argue that Open RAN isn’t very relevant except to mobile-infrastructure geeks. What would create a non-slice impact? Edge computing.

The big innovation in 5G, from an infrastructure and openness perspective, is the use of hosted elements rather than a fixed set of static appliances. Obviously hosted elements need hosts to run on, so presumably any 5G implementation would promote hosting. Open RAN defines more elements that rely on hosting, so it would promote more hosts. OK, that’s true, but there’s a big “however”. Ericsson’s point in the article is that Open RAN is in fact effectively edge/cloud RAN, and that this model raises serious questions about the handling of the data plane.

Another 5G innovation is the separation of the control and “user” planes (CUPS). Functionally, the 5G user plane is very much like an IP network, but it has a collateral role in CUPS, because there’s a link between the RAN implementation and the Core (think the I-UPF in MEC and the UPF in C-RAN), and also between slice management (which is 5G-specific) and the Core. Mobility management and slice management impact the UPF flow, which means that some UPF features would be “hosted”. That implies that the data plane of a mobile network would be hosted. You can implement a router as a hosted software instance, but it’s not likely to be the fastest and best option, which was Ericsson’s argument.

Solution-wise, the right answer would be to have a “router” that was a real, optimal, IP data device in all respects, but that would support some means of offering the things that mobile infrastructure needs, which is the GPRS tunneling and its control. Router plus 5G UPF-specific features equals 5G UPF.
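
The “UPF-specific features” are largely about GTP tunnel handling: user packets ride inside GPRS tunnels identified by a TEID. As a rough sketch of the per-packet work involved, here’s the mandatory GTPv1-U encapsulation and decapsulation a UPF (or a router acting as one) has to do; real implementations add extension headers, QoS marking, and mobility handling that are omitted here.

```python
import struct

GTPU_GPDU = 0xFF       # message type for encapsulated user data (G-PDU)
GTPU_FLAGS = 0x30      # version 1, protocol type GTP, no optional fields

def gtpu_encapsulate(teid: int, inner_ip_packet: bytes) -> bytes:
    """Wrap a user IP packet in the mandatory 8-byte GTPv1-U header."""
    # The length field counts the payload, not the mandatory header itself.
    header = struct.pack("!BBHI", GTPU_FLAGS, GTPU_GPDU,
                         len(inner_ip_packet), teid)
    return header + inner_ip_packet

def gtpu_decapsulate(frame: bytes) -> tuple:
    """Strip the header and recover (TEID, inner packet) for forwarding onward."""
    flags, msg_type, length, teid = struct.unpack("!BBHI", frame[:8])
    assert msg_type == GTPU_GPDU
    return teid, frame[8:8 + length]

tunneled = gtpu_encapsulate(teid=0x1234, inner_ip_packet=b"...user IP packet...")
print(gtpu_decapsulate(tunneled))
```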

The next-best approach would be to host the UPF features on an edge/core pool of servers with specialized chips to optimize packet movement. Intel’s x86 model is far from the only game in town even today; the article cites ARM, Marvell, Nvidia and Qualcomm as examples of other chips in use, and Broadcom offers its chips for white-box routers (DriveNets uses them) so they’d clearly be suitable. However, the use of a specialized resource pool could compromise the value of 5G as a means of driving early edge resource growth. Unless the edge applications needed the same special data-plane expediting or at least had another use for the special chips, the chip enhancements might make the edge resource pool too expensive for general use.

The solution to the general-edge-resource-pool problem is to use general x86 chips. As the article pointed out, Intel has taken the position that competition in the general-purpose computing chip space is high, and economies of scale in production are good. The former means that performance of these chips is likely to improve, and the latter that chip costs will be as low as they’re likely to be under any option. If we assumed that Open RAN penetration rates were modest over the decade, then we could assume that by the time there was a lot of interest in deploying hosted UPFs, the x86, AMD, and ARM options would likely work in most UPF missions.

You can see the problem here. Since Open RAN hasn’t taken off as fast as many had expected (or hoped), it’s not advanced edge computing much at all, and if data-plane performance encourages implementations that require special silicon, then it won’t promote a pool of general edge resources. Not only that, the requirement might well dilute the benefits of an open specification, since only a specialized implementation would be competitive with the Big Three vendors.

I’m a big supporter of open standards, and Open RAN, but I’m starting to wonder whether the market is outrunning the value proposition here. It may be necessary for the O-RAN Alliance to start looking at the specific question of suitable hardware for the UPF elements if we’re going to see Open RAN deliver what everyone hoped it would, which was an on-ramp to edge computing.

Is AI an Extension of Automation? If So, that Could be Bad

I’ve blogged multiple times on AI and ChatGPT, so you may be bored with the topic. Bear with me, because this time I’m going to look at what the real, long-term, threat of the technology could be. It may not be as simply scary as it’s popularly portrayed, but in the long term it could perpetuate and even cap a negative shift that information technology and “automation” started.

Think back to the late 1700s. Many of the really good jobs required considerable strength: blacksmiths, stone masons, and so forth. Others required special skills and dexterity, like cabinetmakers. Then along came machines, and suddenly almost anyone could produce the kinds of goods that specialized, high-strength, high-skilled workers would have produced, and in much greater quantity. The industrial revolution was populizing in two dimensions. It made almost everyone a potential producer, and it made goods in such quantities that almost anyone could afford them.

Now think forward to the 1960s. Computer systems were still fairly rare, but many large companies had them. There was an elite of “programmers” who were suddenly high-value employees because they could tame these new monsters and bend them to the company’s will. It didn’t change everyone’s life, but it did create a new kind of elite. If you really, really, understood how to use computers, you were almost literally in a class by yourself. One big insurance company found that about one person in fifty could be taught to program well enough to be commercially useful.

Since that early computer age, we’ve been making computers faster, cheaper, more effective. A personal computer is now within reach for many, but not necessarily within their grasp. If you really know computers well, a PC empowers you to do things that someone without your skill could do only with considerably more time invested, if at all. Jobs that benefit from computer literacy turn out to be the best-paying because of the productivity multiplier computers offer. Computers also started running machines themselves, the first of what we think of as “robotics”. Automation, meaning the conversion of human tasks into tasks a computer-driven machine can do, started to put the kinds of people that the industrial revolution empowered out of work. And that is the risk of AI.

I do not believe that something like ChatGPT could, even in five years, equal a well-qualified expert in a given field. Remember my two friends, one in financial media and the other in tech media, who said essentially that they believe ChatGPT could do as well as some subordinates they knew? I agree, but not as well as my friends could do. AI is at the low end of a human capability scale in terms of answering questions. One could argue, as my friends seemed to be doing implicitly, that it was “average”. Well, so are most of us, by definition, and what happens if a lot of those “average” people in media or finance were replaced by AI? How do they earn a living?

I grew up in the Pittsburgh area, and when I was back there twenty years ago you could find both the rusting hulks of big manufacturing and mill operations, and also find some of the workers who’d been employed there. They had great jobs for their time, and I talked with some of the sons of those workers who expected to have the same kind of jobs when they grew up. They didn’t. Manufacturing has been automated and off-shored. A lot of the “blue collar” workers who made the US a powerhouse in the space have been automated away too.

Office workers, “white collar” workers, weren’t threatened…until AI came along. If ChatGPT can write a story on tech, will somebody start a media company that relies on that and doesn’t bother employing reporters? If ChatGPT can write basic financial analysis, do all the low-level positions in the financial hubs of the world get replaced by a browser window? Automation didn’t come for the white-collar types, but AI is coming for many of them.

We talk about job categories being lost as though we were talking about parking spaces. If there’s a limited number in one lot, you move on to another. Consider, though, what happens if AI can replace all those “average” people? Say half the total population, maybe more. Where’s the next parking lot that could handle all those looking for space? How does an economy, a politic, handle a situation where such a big chunk of the worker population can be replaced by AI? The industrial revolution was populizing. It made more people participants in the economy. The tech revolution is depopulizing; it’s making more people redundant.

I’m not qualified to talk about the morality of something like this; I’m sure others will give it a shot. I think I am qualified to say that a society cannot survive wholesale economic disenfranchising of a big chunk of its population.

Back when I was a junior programmer, the company that trained and employed me used to give programmers a special “any-time” lunch pass so we never had to wait in line to eat in the cafeteria. I used to see the look in the eyes of those waiting in line as I passed them to eat when I wanted. It wasn’t resentment, it was envy, because at that time most people thought that computers would level the playing field further, create the “second industrial revolution” with the same positives as the first. I wonder what those people would think today, if they realized that the people going to the head of the line were working on a technology that could replace them completely.

A more relevant question comes to mind. Suppose that I’d been working on that super-AI concept? A concept that could have likely displaced many, most, or perhaps even all those people in the line for lunch? It would have spared me any hostile glares, but if we assume that other programmers in other companies were working on the same thing, then it raises a very important question. How would those displaced people get the money to buy insurance, or the products or services of those other companies? Without the products and services, could my insurance company employer have paid for the compute power needed for AI, and have paid me and my programming peers? A consumer isn’t just a body; its wages and spending form the basic element of the economy…except when they don’t, in which case it’s not a consumer any longer.

Most of the stories about the impact of ChatGPT and AI are stupid, click-bait stupid. Many who read them will never live to see the kind of impact the stories predict, and some of those impacts will never happen at all. The real risk, the risk of creating mass economic redundancy in a world where class differences are already driving political polarization and warfare, is a long way off. But so are the potential answers. We need to create solutions at least as fast as we’re creating problems, and I’m not convinced that’s happening.

Could Juniper Be on the Verge of a Seismic Positioning Change?

Over the last two years, I’ve been pretty clear regarding the importance of virtual networking. It’s not only that the use of the cloud, and the expanded use of virtualization and containers in the data center, demands it. SD-WAN is a form of virtual networking that’s exploding as a means of connecting thin sites and cloud components to the corporate VPN. Finally, it seems inevitable that the Internet will become the dominant means of connecting both consumers and businesses, and the latter application demands virtual networking for traffic segmentation. That shift also promotes Network-as-a-Service (NaaS) in its broadest sense, which further exploits virtual networking and may well link SD-WAN with the other virtual-networking models.
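To make the segmentation point concrete, here is a minimal sketch of the idea behind overlay virtual networks: endpoints are mapped to segment identifiers (VXLAN-style VNIs, for example) and traffic is delivered only within a segment. The names and segment IDs are purely illustrative assumptions, not any vendor’s data model.

from dataclasses import dataclass

@dataclass(frozen=True)
class Endpoint:
    name: str
    segment_id: int  # e.g. a VNI in a VXLAN-style overlay (illustrative)

def can_forward(src: Endpoint, dst: Endpoint) -> bool:
    """The overlay delivers traffic only between endpoints in the same segment."""
    return src.segment_id == dst.segment_id

finance_app = Endpoint("finance-app", segment_id=5001)
finance_db  = Endpoint("finance-db",  segment_id=5001)
guest_wifi  = Endpoint("guest-wifi",  segment_id=5002)

print(can_forward(finance_app, finance_db))  # True: same segment
print(can_forward(guest_wifi, finance_db))   # False: segments are isolated

The point of the sketch is that segmentation is a property of the overlay, not of the physical network, which is why the same model can span the data center, the cloud, and the Internet.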

I’ve also been clear that I believe Juniper has the strongest product line of all the vendors in the virtual networking space. They have their Contrail virtual-network technology, they acquired what I believe to be the premier SD-WAN tool with 128 Technology, and Mist AI gives them a foundation for automation of network operations, something critical in the highly dynamic and essentially fuzzy world of the virtual network. But I’ve also been critical of Juniper’s positioning of their assets in the virtual network space. Innovative products require innovative messaging, and recently there have been some signs that Juniper is getting more serious about singing their virtual-networks song.

Containers have become the go-to application hosting model for most enterprises, both within the data center and in the cloud. Kubernetes is the preferred container orchestration tool, but even large enterprises have concerns about adopting open-source software in its native form, compiling the source into loadable units, and maintaining all the other elements of a container ecosystem. OpenShift from Red Hat is a supported container platform that includes orchestration, and one element of that platform is the Container Network Interface (CNI), the plugin framework that gives containers their virtual-network connectivity. Juniper already had a Kubernetes plugin for Contrail, and it has now released CN2, a virtual-network SDN/CNI strategy for OpenShift.
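For readers unfamiliar with the CNI model, the mechanics are simple: each node carries a JSON network configuration that names the plugin the container runtime should invoke when it creates a pod. The sketch below emits a generic configuration of that shape; the plugin "type" shown is a placeholder I’ve made up, not the actual CN2 packaging or file name.

import json

# Generic CNI "conflist" shape: the runtime reads this and invokes the named
# plugin binary for every pod it creates. The plugin type below is hypothetical.
cni_conflist = {
    "cniVersion": "0.3.1",
    "name": "cluster-overlay",
    "plugins": [
        {
            # Placeholder for whatever binary an SDN vendor's installer ships.
            "type": "vendor-sdn-cni",
            "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        }
    ]
}

# CNI configuration files conventionally live in /etc/cni/net.d/ on each node.
print(json.dumps(cni_conflist, indent=2))

The takeaway is that swapping the CNI plugin swaps the virtual-network behavior of the whole cluster, which is why a supported, tightly integrated CNI is strategically important to a platform like OpenShift.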

Juniper has taken steps to integrate CN2 tightly with OpenShift, to make it easier to install and easier to adopt for companies that already have a CNI strategy (which, realistically, is most OpenShift users). This is particularly valuable for companies looking to integrate their data center virtual networks with their cloud networks in a hybrid cloud configuration. Given that IBM (Red Hat is an IBM company) is one of the leaders in promoting hybrid cloud strategies for enterprises, this gives Juniper a hook into what might well be the most critical applications of virtual networking, and thus of networking overall.

Juniper also announced its “SD-Branch” strategy, which combines Mist AI management (via the Juniper Mist Cloud) with the Session Smart Routing technology Juniper acquired with 128 Technology. In this announcement they’ve not only tied in AI but also asserted that the explicit connectivity control offered by Session Smart Routing is a key piece of zero-trust security and enhances security overall. Other Juniper security tools can be layered onto this framework. Finally, the SD-Branch bundle includes the Marvis Virtual Network Assistant, a tool that provides easy operations support for complex multi-technology networks that include SD-WAN.
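To illustrate what “explicit connectivity control” means in zero-trust terms, here is a minimal sketch of a default-deny session model: a session is set up only when a policy explicitly allows the source/destination/service tuple. This is my own illustration of the concept, not Juniper’s API or policy language, and all of the names are invented.

# Hypothetical allow-list of (source, destination, service) tuples.
ALLOWED_SESSIONS = {
    ("branch-pos", "payments-service", "https"),
    ("branch-ap", "management-cloud", "https"),
}

def admit_session(source: str, destination: str, service: str) -> bool:
    """Default-deny: a session is admitted only if explicitly allowed by policy."""
    return (source, destination, service) in ALLOWED_SESSIONS

print(admit_session("branch-pos", "payments-service", "https"))    # True
print(admit_session("branch-pos", "random-internet-host", "ssh"))  # False

The design choice that matters is the default: anything not explicitly permitted never gets a connection at all, which is why session-aware routing and zero-trust security fit together so naturally.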

These two virtual-network-related announcements were blogged about on the same day, which certainly raises the question of whether Juniper might be looking at creating explicit integration between the two. Such a move would do a lot to unify Juniper’s own story, and also to address the fact that a business network these days is increasingly a composite of the Internet, a VPN, the data center network, and the cloud network. Could Juniper be planning to unite its virtual-network families, and by doing so create a truly universal virtual-network model?

In my earlier blog on NaaS, I pointed out that NaaS couldn’t be simply a usage-priced VPN, in part because the Internet and the cloud foreclosed such a simple vision. CN2 and SD-Branch could be combined to create a true, highly functional NaaS framework that would embrace current VPN commitments (and even true private networks) and at the same time be suitable for the inevitable shift toward using the Internet broadly for cloud and business connections. Not only that, I think those capabilities are already in place, so it’s just a matter of collecting them into a kind of super-product. Since Juniper is unique in the features it already has, this would give Juniper a major lead in the NaaS space, and I think “true NaaS” is where networking overall is heading.

The same moves could give some life to Juniper’s Cloud Metro story, which is IMHO the most critical transformation of our time in network infrastructure. As I’ve said in many blogs (in many ways, too!), the metro piece of the WAN is the most critical piece in terms of infrastructure evolution. It’s close enough to the edge to allow for personalization, which makes it a critical place to inject service features. It’s close enough to be reached with low-latency connectivity, which means that all real-time applications that evolve out of IoT, the metaverse, or a combination of the two will likely have to be hosted there. It’s also, as the logical edge of the core network, the place where any optically intensive core transformation will have to take place. In short, if the Internet is really going to become universal, it’s likely the metro deployments that will launch any technical changes designed to facilitate that universality.
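The latency argument for metro hosting is easy to quantify. Light in fiber propagates at roughly 200 km per millisecond, so distance alone sets a floor on round-trip time before any queuing or processing delay is added. The distances below are illustrative assumptions, not measurements of any real deployment.

# Back-of-the-envelope propagation delay: why real-time work gravitates to metro.
FIBER_KM_PER_MS = 200.0  # roughly two-thirds the speed of light in a vacuum

def min_rtt_ms(one_way_km: float) -> float:
    """Propagation-only round-trip time for a given one-way fiber distance."""
    return 2 * one_way_km / FIBER_KM_PER_MS

for label, km in [("metro edge (~50 km)", 50),
                  ("regional cloud (~1,500 km)", 1500)]:
    print(f"{label}: at least {min_rtt_ms(km):.1f} ms RTT from propagation alone")

A metro site roughly 50 km away contributes well under a millisecond of round-trip propagation delay, while a regional cloud region 1,500 km away contributes about 15 ms before anything else happens, which is the whole case for hosting latency-sensitive IoT and metaverse workloads in metro.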

You could argue that yet another Juniper blog, this one on Apstra’s importance as a data center automation platform based on intent modeling, is an important metro step. Metro isn’t just networks, it’s also hosting, which means that it’s an edge data center. Could Juniper be looking at Apstra as an element in metro? The Apstra blog was five days before the CN2 and SD-Branch blogs.
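For those unfamiliar with intent modeling, the core loop is simple: the operator declares desired state, the system continuously compares it with observed state, and the differences become the remediation work list. The sketch below shows that loop generically; the keys and values are invented for illustration and are not Apstra’s actual data model.

# Declared intent vs. observed state; the diff drives automated remediation.
desired  = {"leaf-switch-count": 4, "vlan-100-present": True,  "bgp-sessions-up": 8}
observed = {"leaf-switch-count": 4, "vlan-100-present": False, "bgp-sessions-up": 7}

def reconcile(intent: dict, state: dict) -> list[str]:
    """Return the deviations an intent-based system would act on."""
    return [f"{key}: expected {want}, observed {state.get(key)}"
            for key, want in intent.items() if state.get(key) != want]

for deviation in reconcile(desired, observed):
    print("remediate ->", deviation)

That closed loop is what makes intent modeling attractive for edge data centers in metro, where the operations staff will be thin and the facilities numerous.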

Those who have followed my views on Juniper, NaaS, and metro over the last couple of years will likely realize that what I’ve said they could do here hasn’t changed much in that time. What may be changing is Juniper’s intent, and that’s the key factor here. We’re on the verge of the biggest service transformation in the history of networking. That means we’re on the verge of the biggest network infrastructure transformation, and the biggest transformation in the stuff that builds the infrastructure. Could Juniper’s announcements be an indicator that it’s really going after this Great Transformation? If so, and if they succeed, we could be in for a very interesting 2023.

Private 5G: Now Dead or Never Alive?

It’s already getting hard to find positive things being said about 5G, and candidly most of the negativity is well-deserved. Hype doesn’t conquer all; it does produce more interesting stories, but reality eventually impinges. At any rate, Light Reading offered another round in the 5G downside fight with a story on how the shine is wearing off private 5G. Again being candid, I have to wonder whether there was ever any real shine at all.

This particular story seemed to stem from the comments that Verizon made during its January earnings call, where the CFO admitted that “A couple of areas where we are behind versus our expectation [are] the mobile edge compute and 5G private networks”. I don’t think Verizon was alone in not seeing private 5G meet expectations, but (returning to candor) I wonder how many of those who expressed lofty private 5G hopes had their fingers crossed behind their backs.

Back in 2021, which happens to be the year when Verizon executives were claiming a TAM of between $7 billion and $8 billion for private 5G, enterprises were telling me that they didn’t see any more value in the technology than they’d seen in private LTE. The article quotes AT&T as pointing out that private LTE was similarly hyped and similarly disappointing. In fact, more enterprises told me that advances in WiFi made private 5G less useful than told me it was useful at all. Still, operators, mobile network vendors, and even cloud providers were eager to jump on the technology as a promising revenue source for themselves and a heady benefit set for enterprise users. Why?

I can’t get on-the-record comments from vendors on things like private 5G, and I get few comments even off the record. However, I did get the sense that a lot of vendors and providers who had been counting on 5G to create new revenue opportunities were engaged in the common practice of dodging negative proofs. You see a given technology as a way to get positive media coverage, and you dive into it. Maybe some believe in it, but most probably harbor doubts. Then, as proof mounts that the positive views were unjustified, you jump to a related area that hasn’t yet been disproved.

OK, you all know how I feel about this sort of hype, but there is perhaps a question worth exploring in this flood of exaggeration and disappointment. How many are the “some” who actually believe this stuff? Is there self-delusion going on here, or simply manipulation? It turns out there’s some of both.

Over the last decade, I asked marketing professionals whether they could be more effective pushing something they believed in versus something they doubted. The numbers varied a bit, but generally about three times as many said they were better at marketing things they believed. Most who told me that said that when they believed something they were “more convincing”, and those who could say why that was suggested that people were willing to get more aggressive pushing something they thought was true, because proof to the contrary wouldn’t blow back on them.

The same group of professionals also admitted (by about the same margin) that people who at least appeared to believe strongly in some new technology were more likely to get meaningful roles in marketing it. That suggests that the ability to appear to love something gets you promoted if that something becomes a focus. Combine that with the other factors and you create organizations that are likely to be cheerleaders for things, whether they really believe in them or not. Given that hype gets you editorial mentions and potential sales engagement, that’s probably enough to spawn some hype-targeted campaigns, even products.

In the case of 5G, though, the big reason for dodging negative proofs seems to be that everyone really did believe 5G was a revolution. Why not? It had such broad support. Yes, the media led the cheers, but operators, vendors, analyst firms, and pretty much everyone else validated 5G. It was budgeted, after all, at a time when having specific funds available meant a lot. How could something budgeted fail? It couldn’t, and it didn’t, of course. It was just that the model of “success” wasn’t what was expected. 5G turned out to have very little user-visible value beyond LTE, and when operators started to pull back on 5G spending, vendors started looking to non-operator buyers to justify what they’d already done. That’s how private 5G got its start.

What, exactly, is private 5G supposed to do? Clearly an enterprise couldn’t deploy a private 5G network as an alternative to public mobile services. Clearly, within a facility, it’s very difficult to see why private 5G would offer much versus WiFi, which is far less costly and more versatile. OK, maybe a campus location could use private 5G where WiFi roaming might be complicated, but wouldn’t that same location do better with public mobile services, including 5G? And after years of standards work, we still don’t have any decisive features to justify private 5G. In fact, it’s hard to find features to differentiate public-network 5G services.

The root problem here wasn’t 5G, or mobile networks, or standards processes. It was bottom-up, supply-side thinking. The Field of Dreams mindset leads you to the promised land only if “they” actually “come”. If that happy result isn’t to be a pure accident, if “planning for serendipity” isn’t a strategy, then the question of who will come, and why, has to be asked before the field is sown. The value proposition has to be clear before you start expecting a lot of traffic.

This may explain why ETSI just announced the adoption of “Software Development Groups” or SDGs. These, the organization says, will “accelerate the standardization process, providing faster feedback loops and improving the quality of standards.” I think that there’s little question that SDGs could translate standards into something actually available faster than just writing specifications would do, but that doesn’t resolve the private 5G problem or other problems of the same type.

Suppose 5G had advanced at a breathtaking pace instead of a glacial one? Would real applications that could exploit 5G, raise operator revenues, and add business benefits have emerged faster? Why? Just having an SDG to build code doesn’t mean the basic mission would somehow change; it might just mean the failings of the process would be recognized sooner. When we had ample providers of private 5G technology, did users suddenly jump on it? Nope, so why would they have done so had the software been available three or four years earlier?

If you’re asking the wrong question, no amount of efficiency in getting an answer is going to overcome the fact that it was the wrong question. You can still plant hopeful corn rather than asking what would motivate people to travel to your field. Software design can start wrong and stay wrong just as easily as formal standards can.

ETSI’s SDGs can “improve the quality of standards” only if standards become something that responds to market needs and opportunities, and if SDGs remain subordinate to the standards process, that’s not going to happen. ETSI has to accept that the problem with standards is how they’re conceptualized: needs and opportunities have to be identified and validated first, and only then addressed with standards and software. That this would mean a difficult transition from supply-side thinking is obvious, and that the transition could have been made a decade or more ago, and wasn’t, is also obvious. That alone demonstrates that issuing a press release isn’t likely to create the transition now. ETSI has to make explicit changes to align its initiatives with realistic opportunities. If it does, it might be able to save 6G.