Can We Build a Metaverse Model as a Collective Community Effort?

Metaverse-wise, I’ve got good news and bad news, and they’re actually the same news. We seem to have built up some momentum behind an open metaverse concept, and both a Linux Foundation and a World Economic Forum framework are being put into place. That’s good because we need an open metaverse, but bad because collective processes like these tend to advance glacially, and because I don’t see clear signs that either initiative has really grasped the essential piece of the metaverse concept.

The WEF initiative is part of their “Global Collaboration Village”, which the WEF says is “the first global, purpose-driven metaverse platform to enhance sustained multi-stakeholder cooperation and action at scale”. While the metaverse focus is explicit, the details suggest that what’s really being established is an ad hoc group that will be working to create an open metaverse model. That, to me, suggests there’s little chance of near-term results.

The press release I’ve cited suggests that what’s really happening here is the creation of an “extension of the World Economic Forum’s public private platforms and in-person meetings and will provide a more open, more sustained and more comprehensive process for coming together”. In other words, this is an application of the metaverse concept to the specific mission of collaborative support. That’s a valid application, but it’s not the foundation of what we need, which is a general model of metaverse-as-a-platform.

Not to mention the question of how open the concept will be. Accenture and Microsoft are the partners of the WEF in building this, and both these companies are tech giants who have their own profit motives to consider. One has to wonder in particular whether Microsoft Teams, which is a key collaborative platform already, might either be integrated into the approach, or whether the metaverse collaboration might end up as a Teams enhancement.

To me, this particular initiative is perhaps an early example of “metaverse-washing”. Collaboration isn’t the metaverse; it’s a small application, one largely tied to the “social-metaverse” concept, which I frankly doubt has much near-term utility. There are just too many pieces needed for it, and those pieces are somewhat disconnected from the more general metaverse platform requirements. This will likely raise metaverse visibility but not advance it in a meaningful way.

The Linux Foundation initiative, called the Open Metaverse Foundation, addresses at least the openness requirement. The OMF is “home to an open, vendor-neutral community dedicated to creating open standards and software to support the open, global, scalable Metaverse.” There are eight Foundational Interest Groups (FIGs) defined at present, and while all of this is (if anything) more embryonic than even in its infancy, the groups seem more related to requirements-gathering than to architecture creation.

That raises the big question that any metaverse collective is going to have to address, explicitly or implicitly. Have we accepted the notion that the “social metaverse” model, meaning a virtual reality space in which users “live” and “work”, is the sole objective for the metaverse? It seems to me that question has to be addressed explicitly, and neither of these initiatives really seems to be doing that, though both seem to come down on the “Yes, that’s the metaverse” position with respect to the virtual reality model. I think that’s a big mistake, or at least a big and risky assumption.

“Virtual reality” is fine as a metaverse foundation, if we relax what we mean by “reality”. To me, a metaverse first and foremost is a digital twin, a virtual, software-managed model of some real-world system. There is no need for a metaverse to represent humans at all, and surely no need for the virtual reality to be user/human-centric. A virtual world and a virtual reality are different; the former is a subset of the latter despite the more inclusive-sounding name. A factory assembly line can be a virtual reality, and so can a warehouse, a building or home, a road or intersection, a conference room, or a totally artificial structure, city, or even world.

Architecturally, then, a metaverse is a model that is synchronized in some way with elements of the real world. Those Elements (let’s call them that for now) are “objects” in the traditional sense of software objects. They’re also “intent models” or “black boxes”, which means that they assert a set of properties that allow them to be manipulated and visualized, but their contents are opaque from the outside. Thus, an Element might be a singular something tied to sensor/control technology, or it might be a digital-twin metaverse in itself. The OMF metaverse foundational interest groups include the “Digital Assets” group, but rather than focusing on things like animation as the example on the OMF website does, the group should be working on the metadata and APIs that represent the properties and interfaces that are exposed.
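
To make that concrete, here’s a minimal sketch, in Python and purely illustrative, of what an Element’s exposed contract might look like: properties and manipulation APIs on the outside, an opaque implementation (which could itself be another metaverse) on the inside. The class and property names are mine, not anything the OMF has defined.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict

class Element(ABC):
    """Hypothetical metaverse Element: a black-box digital twin that exposes
    only metadata and manipulation APIs; its internals stay opaque."""

    @abstractmethod
    def properties(self) -> Dict[str, Any]:
        """Metadata describing what this Element represents and exposes."""

    @abstractmethod
    def get_state(self, name: str) -> Any:
        """Read an exposed state variable, synchronized with the real world."""

    @abstractmethod
    def set_state(self, name: str, value: Any) -> None:
        """Request a change the twin would propagate to its real-world counterpart."""

class ConveyorTwin(Element):
    """Illustrative Element tied to a (simulated) sensor/control feed."""
    def __init__(self) -> None:
        self._state = {"speed_mps": 0.5, "running": True}

    def properties(self) -> Dict[str, Any]:
        return {"type": "assembly-line.conveyor", "exposes": list(self._state)}

    def get_state(self, name: str) -> Any:
        return self._state[name]

    def set_state(self, name: str, value: Any) -> None:
        self._state[name] = value  # a real twin would push this to a controller

line = ConveyorTwin()
print(line.properties(), line.get_state("speed_mps"))
```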

The reason I’m raising this point is that it’s fundamental to preventing the problem that seems to plague, and eventually stymie, all these collective efforts at open standards. You have to constrain the problem or it’s unsolvable. The platform architecture of the metaverse, the presumption that it’s a model populated by digital twinning and manipulated through a set of APIs and parameters/metadata, is a constraint. Accept it explicitly and you can then populate those FIGs with meaningful goals that add up to the creation of a generalized model. Constrain architecture properly and you can un-constrain mission and application.

The opposite is also true, of course. If we don’t constrain architecture properly, meaning if we fail to base the notion of a metaverse platform on the most general vision of “virtual reality”, which is the digital twin concept, then we are going to end up constraining the mission set, because everyone’s view of what “reality” is will color their contribution and create, in some way, a consensus direction.

If you look at the Discord group dedicated to the OMF, you see what I think is an example of this. There are gamers, programmers, and so forth, but the dominant interest is VR, which means a social-metaverse slant. I see that in the FIG selection too. Given that, there is a visible risk that what we could classify as “assumption inertia” will lead us to a metaverse model that doesn’t cover the real scope of metaverse applications. That’s not as bad as an explicit turn in the wrong direction, but it’s still bad if our goal is an optimum metaverse future.

Orange is Working Hard to Transform Profits, but Will it Work?

I doubt that there are many who believe things are coming up roses for the network operators globally. The challenges with declining profit per bit have been recounted by just about everyone, including operators themselves. Wall Street is getting antsy regarding the sector, and a growing number of analysts are expecting some major changes.

Like many others, I’ve been saying that the basic problem operators face is that their connection services set is inherently a featureless commodity that users will take for granted unless there’s something to complain about. Operators have no pricing power, no differentiation at the service level, and every initiative aimed at somehow rehabilitating basic connection services with things like “bandwidth on demand” has failed. It seems inescapable that they need to climb the value chain…and yet….

…here’s Orange, who has in fact done that with things like content creation (OCS) and business services (OBS), and is now being criticized for the very moves people like me have promoted. A French publication characterized those units as “burdens” on the business, and past executives who are seen as being instrumental in the creation of these above-the-network units have been defending their decisions. Next month, Orange is expected to announce plans to either stimulate the units they believe are underperforming or cut them back, or even cut them out entirely. I noted earlier this month that Orange has already decided to sell off its content business.

The article I cite notes that Orange is expected to take a “back to basics” position, focusing more on connection services rather than on something higher-layer and potentially with greater profit potential. The big question will be less what they say than how they expect to make what’s almost a “recover past glory” theme actually recover it. Like all EU operators, Orange has some fundamental plusses and minuses.

I’ve noted before that what I call “demand density”, the economic potential of a market to produce communications revenue, is a critical factor in wireline ROI. France has a demand density that’s roughly the same as Spain’s or that of the US overall, which is far lower than that of Germany or the UK. That means that unlike more dense markets, infrastructure in France isn’t naturally profitable. Another measure is “access efficiency”, which is a measure of how efficient broadband modernization would be based on density maps, rights of way, and other factors. France is again lagging Germany and the UK there, and roughly matching the US overall (or AT&T in particular). This means that broadband improvements would be harder there than in more dense nearby markets.

Mobile services in the EU are very competitive, and of course operators started shifting their profit-production focus to mobile decades ago. Today, mobile services have no better chance of sustaining pricing power and profit potential than wireline does, in France and across the balance of the EU and the UK. Again, the situation is comparable to that of the US. As is the case with wireline, there is no likely service kicker at the connection or basic services level that could change that.

That brings us to the key point, and the key question, that Orange raises. You either have to make basic services more profitable, or you have to make a go of services that, to be profitable, necessarily extend beyond the basic. Those are the only choices there are. Orange tried the second choice, and now it may be retreating to the first. But did they really try that second choice, and is there a first choice to retreat to?

We have content production, business services, and banking today. Is it smart for an operator to enter one of those spaces and face incumbents who know the space well? There’s a figure of speech called “synecdoche”, which substitutes a part, a member of the class, for the whole. Could Orange have inadvertently employed synecdoche here, thinking that a small (and sub-optimal) higher-level service example was a proxy for the whole space of higher-level services? Did they need to broaden their thinking, to go after spaces that were above connectivity but different from content or business services or banking?

I have to wonder whether there exists, in all the world, a more unlikely combination of service goal and goal-seeker than the goal of content production for a telco. Content is all about imagination, something that few would accuse telcos of possessing in any quantity. I think Orange had the right idea but the wrong service target set.

So can they go back to basics? Here my modeling suggests that they cannot do so easily. Telcos whose service area has a demand density and access efficiency at least three or four times that of the US can make connection services profitable with no special measures. Infrastructure passes enough opportunity dollars to make it work. Where demand densities are at least twice that of the US overall, and access efficiencies are likewise, I think that a rethinking of how networks are built, a shift to focus on creating capacity rather than on managing it, could also make basic services profitable. But France doesn’t have a demand density or access efficiency twice that of the US.
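
To make those thresholds concrete, here’s a trivial sketch that simply encodes the rules of thumb from the paragraph above. The normalization (US overall equals 1.0) and the function itself are illustrative shorthand, not my actual model.

```python
def basic_service_outlook(demand_density: float, access_efficiency: float) -> str:
    """Classify the profit outlook for basic connection services.
    Both inputs are normalized so that the US overall equals 1.0."""
    if demand_density >= 3.0 and access_efficiency >= 3.0:
        return "profitable with no special measures"
    if demand_density >= 2.0 and access_efficiency >= 2.0:
        return "profitable only with a capacity-first rethinking of the network"
    return "unlikely to be profitable on connection services alone"

# France, per the discussion above, sits roughly at the US level on both measures.
print(basic_service_outlook(demand_density=1.0, access_efficiency=1.0))
```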

In the US, Verizon has a regional demand density that’s five or more times that of rival AT&T. AT&T’s density isn’t too far off that of France. That’s important because AT&T is working really hard to improve network costs and to define “facilitating services” that they could wholesale to OTTs to become a part of a higher-layer service. Verizon has been fairly quiet in both spaces, which I think is due to the fact that its territory is inherently more profitable.

Orange is one of the four operators who formed the “facilitating services” JV I described in yesterday’s blog. Obviously, they’re not just betting on a return to the past, but as I noted yesterday, their target service set is far from ideal, and any facilitating service option is going to take some time to socialize with the OTTs. In that time, there’s no meaningful revenue from the initiative, and Orange may hope that even if improved basic services aren’t the whole, longer-term answer to their profit needs, they’ll at least hold off the wolves until they figure one out.

Orange may not have the option to go back to basic connection services, and if it’s going to attempt that, it will have to do a fairly radical transformation of infrastructure and not just try to shave a few percent off the capital cost of equipment. Opex transformation, once you’ve picked the low apples, necessarily relies on significant automation, and capex transformation has to look at the overall network model and not just at point-for-point substitution of cheaper gear for legacy boxes. It’s going to be interesting to watch Orange navigate all of this, because their situation is fairly typical. Other telcos need to pay particular attention, because most who aren’t in Orange’s shoes already will step into them before long.

Four European Telcos Form a JV, but For What?

Well, here’s a surprise. Four major European telcos (DT, Orange, Telefonica, and Vodafone) plan to create a joint venture that’s aimed at attempting to address their “disintermediation” and “profit-per-bit” problems. EU operators have already asked the EU to approve traffic subsidies for telcos by the OTT giants. Is this latest JV an indication that the telcos believe the EU won’t approve subsidies, that they believe the subsidies aren’t enough, or that they believe that there really are underlying technology issues that telcos need to address to be profitable?

The JV aims at creating a platform for digital advertising, based on the generation of a “pseudonymous” token that would be specific to a given OTT website. The token would identify the user without revealing personal information, and because there would be a unique token per website/OTT, it wouldn’t be possible to profile the user by combining cookie data. All of this seems directly linked to EU data protection and consumer protection requirements that are aimed at preserving the privacy of personal data.
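
Neither the JV nor the article spells out the token mechanism, but the general idea of a per-site pseudonymous token can be sketched in a few lines. The keyed-hash approach, the key, and the identifiers below are my own illustration of the concept, not the JV’s design.

```python
import hashlib
import hmac

OPERATOR_SECRET = b"held only by the telco/JV"   # hypothetical signing key

def pseudonymous_token(subscriber_id: str, site_domain: str) -> str:
    """Derive a token that is stable for one user on one site (so the site can
    target ads) but cannot be joined with tokens from other sites to build a
    cross-site profile."""
    message = f"{subscriber_id}|{site_domain}".encode()
    return hmac.new(OPERATOR_SECRET, message, hashlib.sha256).hexdigest()

print(pseudonymous_token("subscriber-123", "news.example.com"))
print(pseudonymous_token("subscriber-123", "video.example.org"))  # unrelated value
```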

There’s no question that user identification is a viable mission for a telco or telco JV. Knowing the relationship between a user (as a consumer) and the user’s connection to the Internet is surely the most reliable way of creating an online identity. Similarly, telcos are already charged with (and regulated for) responsible protection of consumer data. The concept, in fact, has already been in trial by DT and Vodafone in Germany, in mobile network applications. However, the article I referenced questions whether user concerns about privacy could derail the initiative. I doubt that, but I also think there are both positives and negatives about the JV’s ability to impact operator challenges.

Let’s first be hopeful. I’ve noted in past blogs that AT&T had raised the idea of operators offering “facilitating services”, which would be services that facilitated OTT behavior without actually creating a direct-to-consumer OTT offering. OTTs would then pay for that service set, and those payments would raise profit per bit. This initiative seems to be an example of such a facilitating service, likely the very first to be offered, and so it’s a potential prototype for further development in the space.

To be workable, a facilitating service has to have two primary properties. First, it has to have the potential for adoption by sufficient OTT mass to actually generate a useful revenue stream. Second, it has to be defensible against OTT competition from players who might elect to do the same thing on their own, or band together to do it collectively. The JV target might meet these goals.

Getting a piece of the ad dollars seems, on the surface, a reasonable way to address both these property requirements. Certainly there are enough ad-sponsored sites and services out there, and there seems little chance that the OTTs in the space would somehow be able to mount their own credible counter-strategy. But there are some “howevers” that we need to consider.

The first is the linkage between the telcos’ problem with differentiation and the JV approach. The great majority of traffic that’s threatening telco profit per bit comes from video content. Yes, much of the content includes advertising, but is the potential revenue from that source proportional to the traffic and operator cost associated with content delivery? Is the strategy also aimed at things like social networking, which may generate less traffic? A disconnect here would pose a risk that regulators would push back.

The second, and broader, question is whether something designed to protect user privacy can really facilitate anything at all for an OTT. Online ads are better than broadcast ads because of targeting, which presumes that the ads rely on knowing something about the user. While a token might anonymize the user in a personal sense, wouldn’t it still have to reveal personal data without revealing identity? If that’s the case, what happens when the user has a relationship with the OTT (like a streaming subscription) that necessarily reveals identity? And if personal information really is significantly more protected by the scheme, then ad targeting is hampered, and the value of the JV would come down to the cost of sustaining some other solution to privacy management versus the cost of the one the JV offers.

The JV then may be one of those “not-the-beginning-of-the-end-but-the-end-of-the-beginning” things. It could demonstrate that telcos are willing to look hard at facilitating services, to cooperate to provide them in a way that would be competitive given market drivers, and to be credible as sources. That’s a good sign IMHO, because I think facilitating services are the key to improving profit per bit in the near term. But I also think this particular service is more a tactical response to regulatory policy evolution than a long-term revenue model.

Tactical responses are dangerous ground for telcos, whose inertial behavior is legendary. Only governments move slower, which is why the JV target could be reasonable in terms of speed of availability. It’s durability I’m concerned with. The biggest opportunities for facilitating services aren’t tactical.

What makes a facilitating service durable is a long-term barrier to competitive entry, and since “competitive entry” would be most likely to come from the OTT side of the game, it’s those OTTs we’d want to keep out. Having OTTs share a third-party (telco, in this case) facilitating service limits their differentiation, so there has to be enough of an offsetting barrier to their rolling their own service instead. It’s hard for me to see such a barrier outside a financial one.

Why could a telco get into a service an OTT could not? Because a telco, as an effective public utility, has a very low internal rate of return. Roughly speaking, the IRR is the ROI level below which investments will actively hurt a company’s financials. Because of their low IRRs, telcos can get into stuff that OTTs typically could not. Telcos could also likely fund positioning investment in infrastructure out of cash flow, which most OTTs would be reluctant to do.
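
A toy net-present-value comparison shows why that hurdle-rate difference matters. The 6% and 15% rates and the cash flows are made-up numbers, chosen only to illustrate the point.

```python
def npv(rate: float, cash_flows: list) -> float:
    """Net present value; cash_flows[0] is the upfront (negative) investment."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical facilitating-service project: 100 upfront, returning 15 a year for 10 years.
flows = [-100.0] + [15.0] * 10

print(round(npv(0.06, flows), 1))   # positive at a utility-like 6% hurdle rate
print(round(npv(0.15, flows), 1))   # negative at an OTT-like 15% hurdle rate
```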

All these plusses have to be balanced against the basic truth that the telcos can’t do facilitating service without a strong notion of what OTT activity they’re proposing to facilitate. If you look at the 5G space, you can find ample proof that rather than having their eyes on the OTT clouds of consumeristic demand and opportunity, telcos have their eyes on their feet. Somehow, they believe, saying just the right thing about stuff like 5G SA or network slicing is going to stimulate a whole market to seek out ways of exploiting that stuff, despite the fact that there’s no global scope for these capabilities and few OTT services that don’t demand global scope.

That raises the final issue, which is “feature federation”. A single telco like AT&T can surely create a facilitating service if they extend themselves, but would that single feature, sitting in the midst of a global feature desert, induce OTTs to rely on it? Even something like the proposed JV does nothing more than make the inadequate footprint of a facilitating service a little less inadequate, which is a long way from being adequate, credible, or compelling. How do you share features across operator boundaries the way we’ve shared connectivity in the past? There is no good answer to this today, and I think that telcos like those promoting the JV need to be thinking about working on something in that space. I know from past experience it won’t be easy and it’s likely to take a while. Meanwhile, OTTs may develop their own “facilitating services” and then it will be too late.

Marketing, Sales, Trajectories, and Success

What is the difference between “sales” and “marketing”? Why do incumbents have an advantage in deals? Are new technologies, particularly revolutionary ones, sold differently than established ones? Should they be? All of these questions relate back to something even more basic, which is “how does something actually get sold?”

That seems a pretty basic question to me, given that virtually every company relies on selling things to sustain its very existence. The problem is that while it may be a basic question, there’s a fairly astonishing number of companies that don’t seem to answer it, or even try to. Today, a lot of what we hope for from networking and IT depends on sales success, and we’re falling short in more cases every day.

Decades ago, I started to augment surveys of companies with modeling, and in order to model how a market is going to work, you need to understand its basic workings. Fortunately, my survey base was happy to cooperate in creating what I called a “decision model”, a model that considered the factors that made up the progress from “suspect” to “customer”. I’ve updated it over time as buyer behavior changed, but many of the lessons of that early period are still valid today.

Most selling in tech tends to happen through formal bidding, and most products and services are introduced into a tech framework for which there exists an incumbent vendor who has a degree of account control. In some cases, the account is big enough to justify an on-site sales team. In those cases, traditional sales-centric processes are generally successful in moving goods and services.

Companies that have real account control, and particularly those with on-site sales resources, are usually able to promote technology advances more easily because they interact regularly with management in both IT and operating organizations. They don’t always use this capability, though, because of the risk that new technology will overhang current sales and introduce new competitors in a game the incumbent is already winning.

The problem arises when one or more of those conditions aren’t true, which is often the case for new technologies or radical approaches, and is also true if a non-incumbent wants to break into an account. In all these cases, it’s critical to be able to transition companies efficiently from being suspects to being customers. And it’s not easy.

A company is a “suspect” when they are sensitive to the possibility of a purchase. In this stage of their evolution, they aren’t committed to anything, only perhaps interested. Their attitudes, value propositions, opinions, and sensitivities are all over the place; think Brownian Movement. In this state, with all this variability, it is almost always fruitless to apply sales resources to the company because there’s no way of knowing how to make an effective approach, and because too much “education” would be required to prep for an actual order.

The sales process, I’ve found, can be compared to a funnel. At the wide end, you want it to intersect all those “suspects”, but you want the process to gradually order the chaos so that, at the narrow end, companies emerge in a state where sales resources can be applied to them effectively. You’re controlling their “trajectory”, and so I’ve called this “trajectory management.” A company would progress through the funnel, changing from “suspect” to “prospect” to “customer” with the application of specific pressures, and it’s creating and managing those pressures that form the basis for an effective marketing/sales program.

In today’s world, the first of these pressures is editorial mentions. A suspect sees a news or opinion piece that mentions a source of goods or services, and because they’re “interested” in the topic, probably only because the headline catches their attention, they read the piece. If we were talking about consumer products, we might see this as the decisive step, followed by visiting an online storefront and making a purchase, but obviously business tech isn’t an impulse buy. So what do we expect to happen? That’s a question rarely asked.

Realistically, in my research into buyer progression, what happens is a website visit. I saw a nice article mentioning Company A’s product/service. I was intrigued, so I visited their website. This establishes the first of what we could call the Trajectory Management Truths, which is that editorial mentions sell website visits. It’s perhaps the most critical truth in all of marketing/sales, and the one that vendors mess up most often.

I get pitches for stories all the time, even though I say on my website that I won’t take any suggestions. Vendors and service providers want me to talk about them and send me what they believe is the compelling reason, and way over 90% of the time, it’s garbage. What they’ve done is violate the second Trajectory Management Truth, which is that you cannot take a sales message into a marketing channel. Marketing is about mass distribution of information to prepare a company for a deal, not the process of taking an order, yet most of these pitches are sales messages. No outlet is going to accept a piece long enough to carry a convincing sales message, and nobody would read it if they did. The pitches need to focus on getting an editorial mention first, meaning getting somebody to write and run something; then they need to leverage that mention into a website visit.

So, OK, let’s assume that your editorial mention has actually gotten the suspect to your website. What now? Look at the websites of any of the big vendors or service providers today, and you’ll find that there is no link between the structure of the website in general, or the homepage in particular, and the editorial slant the company is presenting. If you seeded a story about AI, for example, what do you think a reader of that story is looking for on your website? AI, obviously, and my research says that if they can’t quickly navigate to that topic and find something interesting (three or four steps, maximum), they’ll abandon their effort.

Any PR campaign needs to be anchored on the homepage. You pick up the theme of the PR and editorial slant and you run with it. If you do that, then editorial mentions, which sell website visits, can carry you to the next step in the process. Which is…?

Which is that website visits sell preliminary engagement for collateral. So let’s say the suspect followed the AI editorial mention to your website, and found the necessary links to the information they now want. What information is that? Not product specifications, not at this point. It’s marketing collateral, things like white papers or videos. This is another step that vendor websites and programs typically mess up, by providing too much undigested information at this critical point. You do not want a prospective buyer to make a purchase decision without ever having sales contact, so don’t give them the information that encourages them to do that.

What do you expect, if not a sale, at this point? The primary thing you want is for the suspect to identify themselves and indicate that they are considering and not just interested. This is where all that Brownian Movement starts to enter the narrower part of our funnel. If you construct your website so that the choices your suspect makes while navigating it lead them on the path they belong on for deal-making, they will consume information that tags them as a prospect and identifies the value propositions they’re most sensitive to. That enables optimum targeting of your next step, which is based on the principle that engagement collateral sells sales calls. The path out of the collateral step is the realization by the prospect that they want to talk to you. Real sales processes can now start.
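
Pulling the “truths” together so far, the whole trajectory reduces to a simple state machine in which each stage is advanced by exactly one kind of pressure. The sketch below is just a restatement of the funnel in code; the intermediate “visitor” label is my own shorthand, not part of the original model.

```python
# Each stage is advanced by one specific pressure; skipping stages (for example,
# applying sales resources directly to a suspect) is what the model warns against.
TRAJECTORY = [
    ("suspect",  "an editorial mention sells a website visit"),
    ("visitor",  "the website visit sells engagement with collateral"),
    ("prospect", "engagement collateral sells a sales call"),
    ("customer", "the sales call sells the product or service"),
]

def next_stage(current: str) -> str:
    """Return the stage that the right pressure moves a company into."""
    stages = [stage for stage, _ in TRAJECTORY]
    i = stages.index(current)
    return stages[min(i + 1, len(stages) - 1)]

stage = "suspect"
while stage != "customer":
    stage = next_stage(stage)
print(stage)
```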

Based on what? According to my research, successful sales strategies are based on issue ownership. A potential buyer is sensitive to three classes of issues: enablers, differentiators, and objections.

Virtually all enterprise purchases of technology have to meet a business case, and the issues that can make a business case are the enablers. If you do not own the enablers, you have left the most critical aspect of the deal on the table. Your only hope is that your competition doesn’t do any better, because a source that can demonstrate they can help the buyer make the business case can move forward. Enablers are so important that you don’t want to leave them for the last phase of the trajectory; they should be considered as early as possible in the trajectory, even at the level of editorial mention.

Differentiators are the issues that set you apart from other sources of products or services. Unlike enablers, which you have to jump out and seize, you have to take care with differentiators, because introducing them invites your prospect to wonder who you’re counterpunching against. They won’t make a business case, so they won’t move the prospect along; introduce them only when the prospect has clearly reached the stage where they’re certain to be looking at other sources.

I recently audited a test of a pitch from a vendor who was promoting “sustainability”. My question was “Do you believe that a company will buy your product simply because it’s sustainable?” They admitted they would not. Then I asked “Do you think they’d buy your sustainable product in preference to one that could deliver a higher ROI?” They took a bit more time, but eventually said that wouldn’t happen unless the ROI difference was minimal. They just proved that sustainability is a differentiator, not an enabler.

The most troublesome of the issue types is the objection. Obviously, the term means an objection raised by a prospect to some visible or alleged element of your value proposition. There are two questions that have to be asked before a response can be devised. The first is whether the objection is a signal of a missing enabler or differentiator; have you failed to raise a point when it was important? The second question is where the objection really originated. It’s one thing if the prospect actually raised it, another if it was raised by a competitor, and yet a third if it originated in editorial commentary.

You can’t afford to lose enablers, and if there’s a valid differentiator a competitor raised, you probably want to have an effective response disseminated to the sales organization. There’s a body of reasoning that suggests you should respond to competitive differentiators raised as objections by including them in material delivered to the prospect early in their trajectory, but my data doesn’t show that’s a good idea.

A final point, at the final point in the process, which is the actual sales dialog. Collateral sells sales calls, and sales calls sell the product or the service. Most tech sales organizations work effectively if the early stages of the trajectory are handled, but there is one key point that’s easy to miss. Sales is a process of confidence transfer. The prospect, if they’re confident in the salesperson, will transfer that confidence to the representations and to the product. If they are not, then nothing much can be said or done to facilitate a close. This is important because it may be that the greatest value of trajectory management is that it increases sales confidence. That happens not only because sales efforts are more successful, but also because less time is wasted on bad prospects, buyer education, and so forth. Shortcomings early in the trajectory fall to the sales force to correct, and that is never easy, never fun.

Is 6G Already on a Trajectory to Doom?

Let’s be honest, 5G has been a bit of a disappointment. There were all manner of wild claims made for it, starting when it was a gleam in the eye of standards-writers and continuing until real deployment started. Then people started to realize that the average mobile user would never know they had it at all. They realized that for operators, full 5G Core implementations with network slicing were going to be yet another technology looking for a justification. Then the media realized that they’d said all the good things they could have about 5G, so it was time to start questioning the whole thing…and looking for a replacement.

6G follows the recent trend in generational advances in cellular networking, and if you look at the definitions of 6G (like the one in Wikipedia), it’s hard not to compare them to early 5G promises. 6G is “the planned successor to 5G and will likely be significantly faster.” Wasn’t that the big deal with 5G versus 4G? Or “6G networks are expected to be even more diverse than their predecessors and are likely to support applications beyond current mobile use scenarios, such as virtual and augmented reality (VR/AR), ubiquitous instant communications, pervasive intelligence and the Internet of Things (IoT).” Again, wasn’t that the 5G story?

The truth is that we’re already on track to create a 6G mythology that will lead to hype, exaggeration, and disappointment. Is it too late to save 6G, and is there really any reason to save it at all? Let’s try to answer that question.

6G, like 5G, moves cellular to higher frequencies, higher cell capacities, and a larger number of potential user/device connections. It will likely lower network energy consumption and improve scalability, and it will perhaps allow for tighter integration between the network and contextual services, to use my term. If you wanted to offer a realistic summary of what to expect from 6G, that summary would be that it relieves limitations that would hit advanced applications based on 5G, as those applications became pervasive. That, it turns out, is the critical point about the whole 6G thing. 6G is in a race.

As I’ve pointed out in many of my previous blogs, there exists a “digital-twin-metaverse-contextual-services” model that could evolve to tie information technology more tightly into our lives and our work. In that model, we would all move through a whole series of “information fields” that could supply us with critical information at the very point where we interact with the world (and job) around us. These fields and our ability to use smartphones to tap into them would build that “hyper-connected future” that some vendors are already pushing. The problem with that exciting vision is that such a framework needs a whole lot more than a new generation of wireless. In point of fact, what we could really do in the contextual services space today wouldn’t likely tax even 4G, much less be sitting around and waiting for 6G to explode onto the scene.
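
To give “information fields” a little shape, here’s a toy sketch of the idea: location-anchored information published into an edge-hosted model, with a user’s agent pulling in whatever fields the user is currently “inside”. Every name and number here is mine, purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class InformationField:
    """A location-anchored piece of information waiting to be tapped by nearby users."""
    center: tuple    # (x, y) in arbitrary local coordinates
    radius: float
    payload: str

FIELDS = [
    InformationField((0, 0), 50.0, "forklift active in aisle 3"),
    InformationField((400, 120), 30.0, "loading dock 2 is free"),
]

def fields_at(position: tuple) -> list:
    """Return the information fields a user at this position is inside."""
    px, py = position
    return [f for f in FIELDS
            if (px - f.center[0]) ** 2 + (py - f.center[1]) ** 2 <= f.radius ** 2]

for field in fields_at((10, 5)):
    print(field.payload)   # a worker's agent would deliver this contextually
```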

As we wait, as 6G waits, it is subject to the same trajectory to doom that 5G followed. You start with unrealistic claims, convert them into wild expectations, stir in a lot of hype, and end with disappointment, disillusionment, and finally the worst thing that can happen to new technology, boredom. We are already on that path today, pushing the characteristics of something that has absolutely no technical substance yet. But whatever it is that we need in networking, be it excitement, profit, transformation, or whatever, we can assign to 6G. Because it has no technical substance, you can’t disprove any claim you make, and the wildest stories get the most publicity.

How is this a race? Well, 6G is following its trajectory-to-doom, and at the same time we’re starting to glimpse the elements of the sort of “contextual services” that could eventually exploit, even drive, it. We can’t justify radical technologies except through the exploitation of radical business cases. You can’t evolve to a revolution, so we either abandon the revolutionary or abandon safe, comfortable, and above all, slow progress in creating justifications. The question is whether the contextual framework can outrace that doom, and sadly, it’s not a fair race at all. 6G hype is likely to win.

The problem with contextual services can be explained by asking just what that “hyper-connected future” is connecting and what value that connectivity is bringing. Technology advances that don’t frame any real benefits and that can be deployed only by incurring real costs tend to stall out. That’s been the problem with all the new network services of the last couple of decades. I have offered, through contextual services, an example of what a hyper-connected application would look like, and think of all the pieces that are needed. You need contextual applications, you need sensors, you need “information fields”, you need network agents representing users and other elements, you need edge computing…the list goes on. If you have all that and it’s adopted, then 5G would indeed have to give way to 6G, but if deploying 6G is a major investment, then what kind of investment is needed for all that contextual stuff?

You can’t justify a new technology by citing the driver that other technologies create, if those other technologies also need justification. That’s particularly true when you’re trying to justify something that’s on the tail end of a long requirements chain. 6G, when you cut through the crap, is something that does more of what 5G was supposed to do, and we haven’t managed to get 5G to do it all yet. Or even a little of it.

We have technology in place to start developing contextual services. A team of maybe three or four good software architects and two-dozen developers could be working on a prototype by the end of the summer. We could identify places where the hosting and connectivity resources are available already, and where the value proposition for contextual services could be readily validated (or proved to be nebulous). If we expended a fraction of the resources that 6G will surely suck up on a contextual-services model, we could actually advance not only 6G but edge computing.

The notion of supply-side market drivers presumes one critical thing: pent-up demand. If you offer something that involves a significant “first cost” deployment investment, you have to assume that you’ll sell something that generates ROI, and quickly. But pent-up demand is something we can usually measure and model, and the most telling symptom is what I’ll call the “verge clustering” phenomenon. If Thing X is needed and there’s really pent-up demand for it, then users/applications will cluster on the verge of Thing X, the place where it’s possible to get as close to the needed element as possible. We should be seeing 5G insufficiency in order to presume pent-up demand for 6G, and in fact we’re still trying to justify 5G.

Sadly, the same thing is true for contextual services, or perhaps even worse than the same thing. With contextual services, it’s hard to even define what “the verge” is, because we’re largely ignoring the concept. We talk about IoT, which is just a sensor-and-sensor-network technology, not an application. We need to talk about the utility of IoT, and of 6G, and of contextual services.

Why don’t I fix this, or try to? I’ve written a lot about the cloud, the edge, 5G, and more, and I think that most of my readers will say that I’ve been on the right side of most of the issues that have developed in any of these areas, well before they were widely accepted. I think I’m on the right side of this one too, but I’m a soothsayer; I can’t move mountains, only predict where they’ll go. But if you have a respectable plan for moving a bunch of rock, I’m willing to listen.

What Can We Say About the Potential For Edge Computing?

What, exactly, is the potential for edge computing? It seems to me that we’re seeing more of a need for the “what-is-the-potential” or “what’s-the-justification” sort of pieces every week. Most of that, I think, is generated because there’s an endless appetite for exciting things, and less so for true things. The problem is that excitement doesn’t translate directly into technology opportunities. Is there any opportunity for edge computing?

The popular themes on edge computing make points like “the edge will be much bigger than (or will replace) the cloud” and “business applications need very low latency to be effective.” We even have “the edge is eating the Internet.” None of these points are valid, but for the first two, we could make similar statements that would be true, so maybe the best way to open this is to contrast the stuff that’s not real with that which is.

The size of the edge opportunity is a good place to start an edge discussion. Edge computing is the subset of cloud computing that deals with a need for exceptionally short control loops, largely because of real-time, IoT-like, applications. There are many such applications today, but they are realized using “edge” facilities on the users’ premises. There is IMHO no reason to believe that these current edge applications would move to a public edge computing model, any more than there is to believe that everything is moving to the cloud. I don’t know any serious application architect who believes otherwise.

So how much of an opportunity is there? Today, it’s in the noise level. However, I’ve been saying for a decade that there is a class of new application, a new model of applications, that I’ve called “contextual services”, and this new class could generate significant incremental revenue in both the business and consumer space. If that new class of application were to be deployed optimally in both spaces, it could justify the deployment of about a hundred thousand new edge data centers. The great majority of them would be “micropools” of resources, though, comprising a couple racks of servers. In terms of real edge hosting revenue, the potential of these pools would be roughly three-quarters of what I believe the total revenue potential for cloud computing would be.

If close to the user is a justification for the edge, because latency is a barrier to an application, then with the user is better than any other place you could put edge resources. Today, industrial, warehouse, and process control edge applications run (according to enterprises) an average of 450 feet from what they’re controlling. Ten miles is what you might expect, optimistically, as the distance to a reasonably efficient public edge pool, and that’s almost one hundred twenty times as far.

Couldn’t you run today’s cloud applications at the edge, though? Isn’t that a reason for edge computing to displace cloud computing? Yes you could, but no it isn’t. The problem is that edge computing economies of scale are limited by the relatively small population that are close enough to an edge point to justify using it. Micropool, remember? Public cloud computing offers better operations and capital economies than you could enjoy at the edge.

But what about the benefit of low latency to those cloud applications? Human reaction time is around 300 ms, which is 6 times the round-trip delay associated with transcontinental fiber transport of information. So let me get this straight. You want to deploy thousands of edge data centers to reduce transport delay that’s already a sixth of human reaction time? At the least, it would be cheaper to improve the connection from the users’ access network to the cloud.
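
The arithmetic behind both of those comparisons is easy to check. This sketch assumes the usual rough figure of about 5 microseconds of fiber propagation per kilometer and a roughly 4,700 km transcontinental route; both numbers are my assumptions, not measurements.

```python
FIBER_US_PER_KM = 5.0      # rough one-way propagation delay in optical fiber
FT_PER_KM = 3280.84

def one_way_delay_ms(distance_km: float) -> float:
    return distance_km * FIBER_US_PER_KM / 1000.0

onprem_km = 450 / FT_PER_KM        # ~0.14 km: today's typical control loop
edge_km = 10 * 1.609               # ~16 km: optimistic distance to a public edge pool
transcon_km = 4700                 # rough US coast-to-coast fiber route

print(round(edge_km / onprem_km))                   # ~117x farther than on-prem
print(round(2 * one_way_delay_ms(transcon_km)))     # ~47 ms transcontinental round trip
print(round(300 / (2 * one_way_delay_ms(transcon_km)), 1))  # ~6x inside 300 ms reaction time
```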

That raises the second point, which is that business applications only work with low latency. Well, we’ve built a multi-trillion-dollar online economy on much higher latency. Transactional applications work at a global scale today without edge computing. We can take orders, support content delivery, and manage businesses with applications that haul data all the way back to a corporate data center from anywhere in the world. How much latency do you suppose that involves?

So, you might be thinking, that’s it for edge computing. Not so. As I said, you could modify the statements I opened with to create new ones that could well be true.

“The edge will be bigger than the cloud”, for example, could become “The potential growth in edge computing revenue is bigger than the potential growth in cloud computing revenue”. That’s true. “Business applications require very low latency to be effective” becomes “New applications that would transform business productivity require very low latency to be effective”. That’s also true, and the combination of those truths is what will either justify the edge or prove an insurmountable barrier to its scaling.

Why is low latency important for an application? The answer is that the application has to be synchronized with real-world activity, both in terms of sensing it and in terms of controlling it. There is no other meaningful application for low latency. Why can’t that be achieved with local processing? Answer: Because available local processing points cannot do the job. When would that be true? Answer: When users are highly distributed and mobile.

What I’ve called “contextual services” are information services targeted to users moving around in the real world, wanting computing/information support for their activities there, delivered automatically in response to a service knowledge of what the users are trying to do. The only credible conduit for the delivery of a contextual service is a smartphone (perhaps married to a smart watch), which clearly can’t host a lot of application logic.

Contextual services require an “agent” process that runs in the edge/cloud and collects stuff for delivery, as well as interpreting contextual information either received from the user’s phone or inferred from user searches, calls, texts, location, and so forth. To be relevant, it has to be able to absorb context from the user (directly or indirectly) and deliver insights that are real-time relevant.
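
A skeletal sketch of that agent idea might look like the following; every class and method name is hypothetical, and a real agent would obviously infer context from far richer signals than a single location value.

```python
class ContextualAgent:
    """Hypothetical per-user agent hosted in the edge/cloud: it absorbs context
    signals (location, searches, calls) and returns only what is relevant
    to what the user is doing right now."""

    def __init__(self, user_id: str) -> None:
        self.user_id = user_id
        self.context = {}   # latest view of where the user is and what they're doing

    def absorb(self, signal: str, value) -> None:
        """Update context, whether reported by the phone or inferred from activity."""
        self.context[signal] = value

    def relevant_insights(self, information_fields: dict) -> list:
        """Deliver only the information fields that match the current context."""
        return information_fields.get(self.context.get("location"), [])

agent = ContextualAgent("worker-7")
agent.absorb("location", "aisle-3")
fields = {"aisle-3": ["pick order 1182 from bin 14"], "dock-2": ["truck arriving in 5 min"]}
print(agent.relevant_insights(fields))
```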

So isn’t this a business application that demands low latency? Yes, but it’s not a current application, or even a justification for an application that could be considered validated by demonstrated opportunity and supported by available technology. Contextual services are created from an ecosystem, no element of which is currently in place. To say that they validate the edge makes too many presumptions, even for someone like me whose modeling has predicted that the market would be justified by the mission. We need to see more of the contextual services model before we can start using it to justify edge deployment, or claim it could rival the cloud.

I object to the kinds of statements I’ve opened with, because not only are they not currently true at all, but also because what’s needed to make them sort-of-true is a long way from happening. And, sadly, the statements could hurt the evolution of the real opportunity by promoting the idea that a mountain of real work doesn’t lie ahead, when it does. It will stay a mountain if we never get started because we think we’ve already climbed it.

Why are Operators Forecasting a Capex Crunch, and What Could be Done?

It’s tempting to attribute the growing interest in capex reduction among network operators to the current economic malaise. The problem is that the push to buy less gear, or at least spend less on it, has been around for well over a decade. The recent interest in the capex plans of AT&T and Verizon, then, is an investor reflection of an old problem, one that investors can’t solve themselves and that capex reduction may not be able to solve either. And there are new problems to confront too.

Over the long haul, operators have typically spent about twenty cents of every dollar on capex, but the current trend is slowly downward. My 2022 data says that the industry was targeting a bit more than 19 cents per revenue dollar for 2023, which doesn’t sound like a lot but which is alarming many of the vendors who depend on network operator spending for a big chunk of their revenue.

The reason for the dip is what can only be described as an alarming negative trend in return on infrastructure, most of which can be attributed ironically to the most successful network venture of all time, the Internet. Broadband Internet has transformed our lives and the fortunes of operators, because it enabled the development of a host of new applications that the operators themselves have largely been unable to profit from. Yes, many will claim that regulatory limitations kept them from getting into these areas, but even without the barriers, operators have largely been unable to make a go of higher-level services, including content. Orange recently announced it was selling off its content unit, one of the pioneer ventures in the space for telcos, and AT&T’s retreat from the TV business is well-known.

When you ask operators why they’re under pressure in 2023, the most common response is that “5G didn’t generate the expected bump in revenue”, and while there’s some truth in the answer, it’s the truth behind it that really matters.

Historically, users have tended to spend a fixed portion of their disposable income on communications services. While 5G offers clear benefits to a telco in terms of capacity per cell, the value of its supposedly higher speed turns out to be hard to see from the perspective of the 5G consumer. Yes, mobile downloads might go a bit faster with 5G, but with phone storage limited and the value of downloaded files on a small device at least as limited, most users didn’t find much to sing about. That meant that there was no appetite for higher 5G charges.

The question is why operators didn’t see this, and continue to miss more subtle 5G benefit shortfalls like the value of network slicing and 5G Core. There’s no question that a big part of the problem is that an industry that has thrived for over a century on supply-side thinking doesn’t get a strong handle on consumerism. There’s no question that self-delusion is another part of the story, and all technology companies tend to push new technology stories if they think it will help them with Wall Street. And yes, the media plays a role by playing up sensational stories about what something like 5G will do, on the grounds that the truth is rarely exciting enough to generate clicks. But every single network operator I’ve chatted with has senior planners who are frustrated by what they see as clear failures to exploit opportunities.

The cloud is a good example. Verizon bought Terremark in 2011, which gave it an early foot in the door of what’s arguably the most important technology sector of the decade, then sold it off six years later to IBM. While IBM hasn’t set the cloud world on fire, they’ve certainly earned a place for themselves in recent years, so why couldn’t Verizon do the same? Sure, telcos are conservative companies, but who ever said IBM was anything but conservative? Verizon also sold off its data centers to Equinix, which got them out of another aspect of the cloud business that’s now hot on Wall Street.

Is there something fundamental here? Some reason why the network operators have difficulties with anything but same-old-same-old? Maybe. Some have suggested that since they were late into the cloud, they couldn’t compete with other providers on economy of scale, but as I’ve pointed out, economy of hosting follows an Erlang curve, so it doesn’t improve continuously as the size of the pool grows. I think the problem is a different kind of economy, economy of scope.
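
The Erlang point can be shown numerically. Using the classic Erlang-B recursion as a stand-in for hosting efficiency, the utilization you can achieve at a fixed “blocking” target climbs steeply for small resource pools and then flattens, which is why a later entrant isn’t hopelessly behind on scale alone. The 1% target and the pool sizes below are arbitrary illustration values.

```python
def erlang_b(offered_load: float, servers: int) -> float:
    """Blocking probability for a given offered load (erlangs) and pool size."""
    b = 1.0
    for m in range(1, servers + 1):
        b = (offered_load * b) / (m + offered_load * b)
    return b

def max_utilization(servers: int, blocking_target: float = 0.01) -> float:
    """Highest per-server utilization that keeps blocking under the target."""
    load, step = 0.0, 0.5
    while erlang_b(load + step, servers) < blocking_target:
        load += step
    return load / servers

for pool in (10, 50, 200, 1000):
    print(pool, round(max_utilization(pool), 2))   # the gains flatten as the pool grows
```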

Network operators have the great majority of their staff and real estate concentrated in their home market areas. If you live in the northeast US, you’re probably only a few miles from some Verizon facility, and same for AT&T if you live in California. You need buildings to house cloud resource pools and staff to operate them. A credible cloud offering has to be at least national, and increasingly multinational in scope. In order to make a real entry into the cloud market, an operator would have to either spread out beyond its own staff-and-facilities border (at great cost) or “federate” with other operators, something like was done in telecommunications through interconnection. This doesn’t just impact the cloud, either. It’s hard to think of a “higher-layer” service that isn’t built on hosted software features, which of course require something to host them on.

Federation to support higher-layer features was actually one of the goals of the IPsphere Forum (IPSF), a body I was heavily involved in over a decade ago. Little has been done since then, and even in the public cloud space you don’t see much interest in federation for the same reason that it’s not all that hot among operators; big operators don’t want to enable more competition.

Because mobile services have a much broader footprint than traditional wireline, it’s tempting to ignore the issue of footprint when talking about creating resource pools, but I don’t think the comparison is valid. Things like tower-sharing have been in place to reduce the cost of extending mobile infrastructure, and few operators have deployed significant incremental facilities to support out-of-region mobile operations. In any event, mobile services are not feature-differentiated; operators are vying for price leadership, total coverage, or the best phone deals. Mobile experience, then, isn’t contributing to operator literacy.

Mobile isn’t contributing to profit relief, either. 5G, in my view, may turn out to be more of a curse for operators than the blessing they’d hoped for. They had to increase capex to deploy it, and yet it hasn’t opened new market areas as they’d hoped it would. How much of the current capex crunch started with that low-return 5G investment, an investment that looks increasingly unlikely to yield any meaningful incremental return?

Operators need to be thinking about all of this, because as I noted in a blog earlier this week, the cloud is encroaching on the whole model of “higher-layer” services, to the point where network as a service may end up being a cloud application and not a network service in the sense that operators offer it. If that happens, then getting out of the profit basement may be very difficult for operators indeed.

Has NaaS Crept in From an Unexpected Direction?

Will this, as an SDxCentral article suggests, be the year of NaaS? I think somebody has suggested that every year for the last five years or so was going to be just that, so it might be interesting to look at why we’ve not achieved NaaS so far, starting with an inability to define what it is or what its value proposition might be. NaaS, of course, stands for “network as a service”, and one thing the article does is outline an actual driver for the concept, a driver that takes a step toward developing a definition. It also ends up introducing a question, which is whether NaaS has already snuck up on us, from an unexpected direction.

One of the problems with NaaS is that most network services are as-a-service models already. Look at MPLS VPNs, which frame what used to be private networks built from digital trunks and routers into a service that replaces all that interior plumbing. The article notes that “NaaS replaces legacy network architecture like hardware-based VPNs and MPLS connections or on-premises networking appliances like firewalls and load balancers. Enterprises use NaaS to operate and control a network without needing to purchase, own, or maintain network infrastructure.” That could be interpreted two ways. First, NaaS extends the VPN concept to network-related on-prem technologies like firewalls and load balancers, so it’s more comprehensive. Second, NaaS allows users to expense their networks rather than build them with capital equipment.

Companies like Cisco tried to offer that second option by “selling” equipment via an expensed service, so the architecture of the network didn’t really change but the way it was paid for did. Some companies like this approach, but it’s not been a stirring success, and it’s not accepted as NaaS by the majority of enterprises. Thus, it’s the first NaaS interpretation that seems to matter.

The reason for that is the increased mobility of workers. Work-from-home during the lockdown not only created an almost-instant need to empower people who were in places that fixed network connections didn’t serve, but also convinced many workers of the lifestyle benefits of out-of-office work. That meant that somehow these workers had to be connected effectively and securely. And since mobile workers are mobile, the mechanisms of NaaS would have to be agile enough to respond to changes in worker location. Arguably, they should also be elastic in terms of capacity and other SLA attributes too.

The basic notion of NaaS as presented in the article is that NaaS would absorb into the network the edge elements that today are on-premises. If you think about this model, it’s hard not to see it as either an implementation of an old idea or the target of a new technology that’s not really considered network technology at all.

The old idea is network functions virtualization, NFV. If you’re going to eliminate on-prem capital equipment it makes zero sense to install the same gear in an operator location; you’d have to charge more for NaaS if you couldn’t deliver any technology efficiencies to cover operator profit margins. Thus, the assumption is that you would host virtual forms of those previously premises devices, which is what NFV was all about.
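
As a toy picture of that substitution, here’s a sketch of a per-customer service chain in which each premises appliance becomes a hosted virtual function. The names and structure are my own illustration, not ETSI NFV artifacts.

```python
# Each entry stands in for a box that would otherwise sit on the customer premises;
# in an NFV-style NaaS the operator hosts these and the customer buys the outcome.
SERVICE_CHAIN = [
    {"function": "vFirewall",      "replaces": "premises firewall appliance"},
    {"function": "vLoadBalancer",  "replaces": "premises load balancer"},
    {"function": "vVPNTerminator", "replaces": "hardware VPN endpoint"},
]

def describe_naas_offer(customer: str) -> str:
    hosted = ", ".join(entry["function"] for entry in SERVICE_CHAIN)
    return f"{customer}: access + hosted chain [{hosted}], billed as a service"

print(describe_naas_offer("enterprise-42"))
```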

The interesting thing, though, is that since its inception in 2012, NFV has evolved toward the “universal CPE” or uCPE model, which relies on a premises device to serve as a kind of agile hosting point for service features. This doesn’t address the problem of mobile workers, but uCPE isn’t mandatory for NFV, just a focus of convenience. The question is whether NFV is really the right approach to NaaS, given that it hasn’t exactly swept the market over the last decade.

That’s particularly true when you consider that new technology I mentioned, which is the cloud.

Companies have been adopting cloud computing to create “portals” for prospects and customers, front-end elements to their legacy applications. Since prospects and customers can hardly be expected to install specialized hardware on their premises, and since companies are unlikely to pay to do that for them, this portal model demands that the business network ride on a universally available service, the Internet, and that the cloud provide things like security and even VPN access. During the lockdown-and-WFH period, companies started to play with the idea of using the same strategy for worker access.

Cloud tools like SASE and SSE are examples of the absorption of network features into the cloud. The fact is that we’ve been developing what the article defines as NaaS as an element of cloud hosting. Public cloud providers are, in effect, doing the absorbing of those additional on-prem devices, and as I’ve noted in other blogs, the “cloud network” is increasingly a full implementation of NaaS.

So what we seem to have here is a realization of NaaS that’s not due this year, but has already happened. If you look at the pace of cloud adoption of network security and VPN features, we could argue that 2022 was actually the “Year of NaaS”.

This raises the question of whether we’re seeing perhaps the ultimate form of “disintermediation”, where network operators find themselves excluded from a potentially profitable service set. Through 2022, almost three-quarters of operators told me they believed NaaS was a logical extension of their current service set that could raise revenues. I think they were correct, but I also think that operators sat on the sidelines as the NaaS concept was implemented downward from the application side.

Ultimately, networks connect stuff, they don’t create stuff, and it’s the “stuff” that businesses and people want. With the advent of the Internet, we saw a fundamental shift in the mission of networks. In the past, the network facilitated connection, presuming that what it connected took care of the “stuff”. When the Internet came along, it bundled connection and experience, network and “stuff”. The cloud built on that. But operators didn’t recognize the importance of the Internet transformation, other than to bemoan its “disintermediating” them from higher-value services. All the factors that could have driven us to “smart” networks, had operators recognized conditions, drove us instead to consign advanced network features to that “stuff” category, a category operators had already lost.

The cloud is an external-to-the-network resource. The Internet represents universal connectivity. Combine the two, which has already happened, and you get what’s arguably the most logical, efficient, and ubiquitous platform for delivering NaaS. This means that operators disintermediated themselves, and failed to realize a fundamental truth about the supply-side, build-it-and-they-will-come nature of telco planning. You have to build it, or they will come to someone who does.