Cisco, VMware, and Growth in the Telco Market

If we assume that 5G will continue to drive telco spending this year, what specific part of telco spending is getting the push, and which vendors are benefiting? According to an SDxCentral piece that cites an analyst report, the answer might be pretty revealing, but not necessarily in exactly the way that the report suggests. There’s a bit of mixing of statistics in the data that can be confusing.

Let’s start by saying that 5G technology isn’t a uniform opportunity because it’s not uniform itself. We have RAN and Core, we have Control Plane and User Plane…you get the picture. At a higher level, we also have the 5G Non-Stand-Alone or NSA that lets 5G ride on an Evolved Packet Core for 4G, and we have 5G Stand-Alone (SA) where 5G RAN marries 5G Core. Generally, specialized mobile vendors like Ericsson and Nokia have played better in the RAN space because of their knowledge of the radio piece of the puzzle, but how well they can leverage that into the Core space, when Core is more about transport, is an open question.

Then there’s the fact that the article I reference isn’t purely about 5G, but about telecom equipment and spending in general. It notes that Cisco dropped in market share for the telecom space in 2022, which was a surprise given that in 2021 it was a big market share gainer. VMware managed to be one of the largest share gainers. Obviously there’s a difference between revenue rankings and revenue growth rankings for vendors, and also major differences in the way that vendors fit into telecom networks overall.

One thing that seems clear is that it’s important to look at 5G spending here rather than smear the story over a mixture of network technologies. One big reason is that 5G is budgeted and everything else has to compete for sustaining dollars. I’d argue that one reason for Cisco’s problem in 2022 was that their primary telco products are routers, and routers address only a narrow chunk of 5G infrastructure. VMware, in contrast, is targeting the 5G space for most of its growth, and that’s the space with broader potential. Microsoft was the biggest gainer in terms of growth of market share.

Still, given that 5G RAN generally deploys ahead of 5G core because 5G NSA can leverage 4G packet core infrastructure, that most deployment news seems focused on 5G Core (5G SA), and that if there’s any place where a router vendor could expect to play in 5G, Core would be the place, you have to wonder why Cisco hasn’t gained share. The answer lies in the Control/User plane separation that’s fundamental to not only 5G but 4G as well.

A mobile core (4G/LTE or 5G) has both a user-plane and control-plane element, at least at the edge where it connects with the RAN. The 5G Core connects with the RAN via what’s called a “backhaul” connection to an edge-of-core element that is 5G-specific. Within the core, it uses traditional IP routing. The majority of mobile-specific routing doesn’t take place in the core at all, but out in what 5G calls the “fronthaul” or (in currently favored distributed DU/CU architectures) the “mid-haul” area. This “routing” is a 5G user-plane activity, meaning that it’s done using a UPF and is tunnel-aware. That means that the really 5G-specific stuff isn’t really in the core at all, and that the mobile core is going to exploit routing capacity that’s already in place unless 5G drives up traffic considerably.
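
To make that separation concrete, here’s a minimal Python sketch, with invented names and stubbed behavior, contrasting the tunnel-aware forwarding a UPF performs (keyed on the GTP-U tunnel ID, or TEID) with the plain destination-IP routing the core already does. It’s an illustration of where the 5G-specific work sits, not a model of any vendor’s implementation.

```python
# Illustrative only: invented names, stubbed forwarding logic.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    dst_ip: str
    teid: Optional[int] = None   # GTP-U tunnel ID, present only on tunneled legs

class CoreRouter:
    """Plain IP router: forwards on destination prefix, knows nothing about tunnels."""
    def __init__(self, routes):              # routes: {prefix: next_hop}
        self.routes = routes

    def forward(self, pkt: Packet) -> str:
        for prefix, next_hop in self.routes.items():
            if pkt.dst_ip.startswith(prefix):
                return next_hop
        return "default-gateway"

class UPF:
    """User Plane Function: the forwarding decision is keyed on the tunnel (TEID),
    which is why the 5G-specific work sits out toward the RAN, not deep in the core."""
    def __init__(self, sessions):             # sessions: {teid: (action, target)}
        self.sessions = sessions

    def forward(self, pkt: Packet) -> str:
        action, target = self.sessions.get(pkt.teid, ("drop", None))
        if action == "decap":                  # strip the tunnel, hand off to plain IP routing
            return f"decapsulate, then route toward {pkt.dst_ip}"
        if action == "relay":
            return f"forward in tunnel {pkt.teid} toward {target}"
        return "drop"

upf = UPF({1001: ("decap", None), 1002: ("relay", "next-upf")})
router = CoreRouter({"203.0.113.": "core-link-a"})
print(upf.forward(Packet("203.0.113.9", teid=1001)))    # tunnel-aware decision
print(router.forward(Packet("203.0.113.9")))            # ordinary IP decision
```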

Some had predicted 5G would do just that, but the truth is more complicated. Let me offer an example: you’re watching an HD movie on something. That movie has a typical characteristic bit rate of about 8Mbps. Suppose you’re watching it on a service with a bit rate of 10 or 20 or 100Mbps. What’s the video bit rate? The same 8Mbps. You don’t push bits into a device; it consumes them based on the nature of what it’s doing. Given that, there’s little chance that 5G would increase video bandwidth consumption, and video is by far the largest source of traffic, and of traffic growth.
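
A toy calculation makes the point; the 8Mbps figure is the one from the example above, and the service speeds are just sample tiers.

```python
# Toy illustration: a streaming session consumes the content's bit rate,
# not the access line's (ignoring brief buffering bursts).
def video_traffic_mbps(content_bitrate_mbps, access_bitrate_mbps):
    return min(content_bitrate_mbps, access_bitrate_mbps)

for access in (10, 20, 100):
    print(access, "Mbps service ->", video_traffic_mbps(8, access), "Mbps of video traffic")
```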

What this means is that Cisco has relatively little chance of seeing a big revenue kick from 5G Core even if we could assume they’d win the business. But the fact that 5G Core isn’t really a driver of router opportunity, and that the 5G specificity of the Core is really limited to an edge-of-core or on-ramp function, means that the UPFs used in the RAN mid-haul could also be used in the Core. That in turn means that an operator is likely to use the same core-side UPFs as they used in mid-haul, and they probably got those from their RAN vendor, which almost certainly wasn’t Cisco. There’s a double negative here, a double negative influence that is.

Then there’s the fact that Cisco is the router market share leader, so they have more deployed routers than competitors. Marginal gains in 5G traffic, which as I said are about as much as we should expect, aren’t going to add much to capacity requirements, and what is added won’t have a major impact on Cisco’s revenue line precisely because that line is so much bigger. Triple-negative.

VMware, on the other hand, is a comparative midget in the telco market. A million-dollar gain in revenue from 5G for Cisco could hide in a rounding error, but it might be a big increase for VMware because they earn less from the market. VMware is also focused on open RAN, which is where all the 5G-specialized opportunity is, and as I’ve already noted, Cisco isn’t. Thus, their gain from 5G is unsurprisingly bigger.

There’s a final point here that’s perhaps even more important. Going back to a blog I did on an Omdia study on telco capex, the majority of capital spending is focused on the access network, not the core. In 5G, even 5G FWA, “access” means RAN and mid-haul. If you want to look for growth in 5G opportunity other than radio-linked stuff, you’d really have to look at Open RAN and function hosting, which in my view means looking at metro and edge computing. The opportunity for vendors in 5G may well be only metro-deep.

Can Openness be Merchandised, Even in Networks?

Everyone loves open technology, except of course vendors who have to compete with it. Still, even vendors seem to embrace it, or at least rely on it in some areas, and there’s growing interest in having open technologies drive us into areas where innovation seems to have stalled out. With all of these positives, though, we have our share of negatives. One is that “a mule is a horse designed by committee,” a second is that “generalized tools can be more expensive than specialized ones,” and another is that “you can’t monetize something you give away.” Can we overcome these barriers, and are there more waiting for us?

There’s an old adage that says “The IQ of any group of people is equal to the IQ of the dumbest, divided by the number in the group.” Putting this in a more politically correct way, it says that groups require cooperative decision-making, and that requires compromises to accommodate everyone in the group, which is harder as the number increases. Anyone who’s been involved in open-source projects or standards development has seen this, but we seem powerless to eradicate it.

Some have suggested to me that the solution is to have a single person or company launch something and then “open” it, which means that the broad membership inherits an approach set by a more controlled group of people. I’ve seen that work and also seen it fail, so I don’t think that’s the solution. The real problem, from my own experience, is that projects of any sort that get off to a bad start are very difficult to turn around. A large group, having necessarily committed a large effort, doesn’t want to invalidate its collective work. You’ve got to start right to go right.

How, though? My best answer is to say that an open project should begin by having a single insightful architect frame the approach. Of course, identifying who that might be is a challenge in itself. An alternative is to create a number of sub-groups (no more than four) and have each contribute a high-level approach model, which for software would mean an “architecture”. The group would then discuss the advantages and disadvantages of each, and pick the model. Then, the full group takes the idea to the next level, and if it’s going well at that point, a commitment to the approach is formalized. If not, one of the other models is picked and perhaps refined based on the lessons learned.

What this seems to do is eliminate a problem that’s dogged the footsteps of virtually every network-related project I’ve worked on, which is approaches biased by old-think. When there’s a model for something in place, as there is with networks, there’s a tendency to think of the future in terms of the present, the familiar. I’ve seen three highly resourced new-network-model projects toss away much of their potential value through that fault. One was ONAP and another NFV, by the way. None ever recovered, so that’s why it’s critical not to have the problem solidified into a barrier at the start.

The second issue could be called “the curse of generalization”. NFV had a bit of this from the first, with the original goal being to transform networks by hosting virtual functions on general-purpose servers. Well, general-purpose servers are not the right platform for the most demanding of network functions, and perhaps not for very many of those that live in the data plane. White boxes with specialized chips are better, and recently it’s been reported that the cost of general-purpose CPU chips is so much higher than the cost of a specialized and even proprietary CPU that it prices “open” devices out of the market.

This problem is more insidious than the chip example, though. Software design is a delicate balance between a generalization that widens the scope of what the software can do, and a specialization that supports the initial target mission most efficiently. We see in today’s market a tendency to look toward “cloud-native” and “microservice” models for something because they’re more versatile and flexible, but in many cases they’re also alarmingly inefficient and costly. I’ve seen examples where response times for a general solution increased by a factor of 25, and costs quintupled. Not a good look.

These are both major concerns for open-model work of any sort, but the last of the three may be the most difficult to address. Something is “open” if it’s not “proprietary”, so open technology isn’t locked to a specific supplier, but free to be exploited by many. Given that, how does anyone make money with it? In the old days of open-source, companies took source code and built and supported their own applications. Even this approach posed challenges regarding how participants could achieve a return for their efforts, without which many key contributors might not sign on. Add in the growing interest for open-source tools among users less technically qualified, and you quickly get support problems that free resources can’t be expected to resolve.

We seem to have defined a workable model to address this problem in the server/application space, the “Red Hat” model of selling supported open-source by selling the support. However, the model fails if the total addressable market for a given open element isn’t large enough to make it profitable to the provider. Still, it’s worked for Nokia in O-RAN; their quarter disappointed Wall Street but they beat rival Ericsson, who’s less known for open components.

The big question that even this hopeful truth leaves on the table is whether a broad-based network change could be fostered by open network technology. O-RAN hasn’t been exactly the hare in the classic tortoise-vs-hare race, and the broader networking market has a lot of moving parts. But Red Hat surely supports a hosting ecosystem that’s even broader, so are we just waiting for a hero to emerge in the open network space? I know both Red Hat and VMware would like to be just that, and maybe they can be. If VMware were to be acquired successfully by Broadcom, the combination might jump-start the whole idea.

Fixing Operator Developer Programs (and Knowing When They’re Not Fixable)

Everyone seems to love developer programs. They promise that some legion of busy programmers will rush to create stuff using your service or product, thereby driving up your sales while you sit back and rake in the proceeds. What more could you ask? Well, how about a rational developer strategy? For operators in particular, that seems to be elusive. As a software architect and a participant in both some developer programs and some initiatives to create them, I’ve got some views on what might be involved in creating one.

Everyone who wants to start a developer program starts with the application programming interfaces (APIs) they’re going to expose and publish, and those are important at two levels. First, APIs represent features that can be accessed and composed into valuable services. The value of the services will depend in part on the API features exposed, but most important, the value of an operator’s contribution to the program is determined by those features. If you expose useless features then you contribute nothing the developers can exploit, and your program is doomed to failure.

One classic mistake operators make here is focusing on what they’d like developers to do rather than on what developers can actually profit from doing. Often that happens because they simply make an inventory of features that they could expose without much regard for what using those features could mean in terms of benefit to the developer. Remember, the developer is trying to create an application that uses the APIs, and hoping to sell the application or a service/services based on it. It’s important to consider that developer end-game if you want to attract developers.

Most operators look at two areas for feature exposure via APIs. The first is connectivity services, meaning voice, data, or message connectivity, and the second is management and operations services, meaning features of OSS/BSS systems. Both these areas have value, but in today’s market neither likely has enough value to actually assure operator developer programs of success. To understand why, we have to look at the way a developer would build on APIs to create value.

There are two models of developer applications that could exploit an operator API set. The first model is the app model and the second the cloud model. The app model says that the developer is exploiting the APIs in an app written for a mobile device or run on a system attached to the network, and the cloud model says that the developer is exploiting the APIs through a cloud application. Both these models have potential limitations.

One common limitation is the likely geographic limitation on accessing operator APIs. Where are the APIs actually hosted? An operator-specific API likely has limited value if the user of the developer’s application/service isn’t in the geography of the operator or on the operator’s own network. A limit in the geography from which customers can be drawn means a limit in revenue, and in some cases limited API geographic scope can hinder effective marketing. If a developer plans to exploit a wider area than the APIs can cover at reasonable QoE, they’d need to support multiple operator relationships, which means either a standard API set or agreement among operators in a regional market on a common API model. Otherwise developers would need to build versions of the application for every operator, a burden sketched below.
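
Here’s a hypothetical sketch of that per-operator burden: every class, endpoint, and parameter name below is invented, but the shape of the problem, one adapter per operator wrapped behind a single application-facing abstraction, is the point.

```python
# Hypothetical sketch: without a common API model, the developer wraps each
# operator's interface behind one application-facing abstraction.
from abc import ABC, abstractmethod

class QosApi(ABC):
    """What the application actually wants: one way to request better QoS."""
    @abstractmethod
    def boost_session(self, session_id: str, profile: str) -> bool: ...

class OperatorAAdapter(QosApi):
    def boost_session(self, session_id, profile):
        # Operator A exposes a REST call with its own parameter names (assumed).
        payload = {"sessionRef": session_id, "qosClass": profile}
        return self._post("/v1/qos/boost", payload)
    def _post(self, path, payload):   # stubbed transport for illustration
        return True

class OperatorBAdapter(QosApi):
    def boost_session(self, session_id, profile):
        # Operator B models the same feature completely differently (assumed).
        payload = {"flow": session_id, "slice": {"name": profile}}
        return self._post("/api/slices/apply", payload)
    def _post(self, path, payload):
        return True

def pick_adapter(user_home_operator: str) -> QosApi:
    # The application must also know which operator serves this user's geography.
    return {"op-a": OperatorAAdapter(), "op-b": OperatorBAdapter()}[user_home_operator]

pick_adapter("op-a").boost_session("sess-42", "low-latency")
```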

Considerations on geographic support limits for the APIs also have an impact on the value of the program, and here is one place where the two possible models of service raise different issues. App-modeled developer programs and applications pose a risk in geographically limited programs because the user may roam out of the area where the app will work, or work properly. With cloud-model development, that risk may not be a factor, but the availability of cloud resources proximate to the user has to be considered. An operator’s developer program may require a single cloud provider to provide effective hosting, and if that’s the case then it limits the developer pool to those familiar with that provider, and it may also indicate that the operator should specialize their APIs for consumption on the available cloud infrastructure for maximum benefit.

If you analyze the failures of operator developer programs, which I think in the last five years have been more numerous than the successes by a good margin, you can trace the majority to a lack of foundation features to expose. Truth be told, connectivity is a given. Truth be told, OSS/BSS systems administer services, and in most cases any real developer opportunities will (as I’ve noted) either be delivered through an app, the cloud, or perhaps both. Both have connectivity implicit in the platform, and the cloud is replete with management, operations, and administration tools.

The final issue on this topic is the problem of APIs forcing a legacy development model. In theory, an API can be accessed by any suitable programming language in any application. In practice, the way the API exposes assets creates an implicit application structure, and operators need to consider this when designing their API details.

A simple example here is the “push versus pull” approach. Classic operator-think tends to look at getting status information or other data by “pulling” it, so APIs are designed to ask for something and to have it delivered in response to a request. Event-driven applications expect things to be “pushed” to them. More and more modern development is being done based on event-driven principles, so having a pull/poll type of API runs counter to this, and in some cases makes the whole API set and developer program unattractive.
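
A minimal sketch of the two styles, with invented names on both sides: the pull version forces the developer to own a polling loop, while the push version lets the application simply register for events.

```python
# Sketch (invented names) of the pull/poll style most operator APIs take today
# versus the push/event style modern development expects.
import time

class PullStatusApi:
    """Operator-style: the application must ask, typically on a timer."""
    def get_link_status(self, link_id: str) -> str:
        return "up"    # stubbed response

class PushStatusApi:
    """Event-style: the application registers a callback and reacts to changes."""
    def __init__(self):
        self.subscribers = []
    def subscribe(self, callback):
        self.subscribers.append(callback)
    def _emit(self, event):            # the operator side would call this on a change
        for cb in self.subscribers:
            cb(event)

# Pull: the developer owns a polling loop, wasting calls when nothing changes.
pull = PullStatusApi()
for _ in range(3):
    status = pull.get_link_status("link-7")
    time.sleep(0.1)                    # poll interval, shortened for the example

# Push: the developer just handles events as they arrive.
push = PushStatusApi()
push.subscribe(lambda evt: print("link event:", evt))
push._emit({"link": "link-7", "status": "down"})
```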

It’s my view that there is no way to make traditional developer programs a success for operators because they rely on exposing those tired old traditional feature assets. If operators want something more from developers they have to contribute more for the developers to work with.

Is Fiber the Only Path to Universal Broadband? Is There Any Path?

OK, call me a cynic, but I think that we tend to obsess about singular solutions to complex problems. Maybe it’s just a human need, or maybe it’s easier these days to present a single story instead of an exploration of some complex set of requirements and even-more-complex set of possible solutions. In any event, one place we see this is in the broadband space in general, and in particular how subsidies to support underserved and unserved users might best be applied. The most popular answer seems to be “fiber”, but is it the right one?

Light Reading cites a study by a wireless broadband supplier that calls a pure-fiber approach into question. Is this just self-serving and opportunistic, or is there a real question of how far you can take fiber, even given government willingness to kick in subsidies? A systemic approach is needed, and I think that has to recognize that there is no single answer to how to promote quality broadband for most markets.

What is the best technology for broadband? The one with unlimited (essentially) potential capacity? Why, given that the average household cannot really justify more than roughly 100Mbps broadband? Why, when operators are already under considerable financial pressure delivering even the current broadband services to the current users? The best solution to any tech problem is the one that delivers the best return on investment, because without good ROI there isn’t any deployment. So what does that say about fiber broadband?

The ROI of broadband depends on the cost of serving a population and the revenue the served population could generate. Generally speaking, as I’ve said for decades, those things depend on demand density, which is roughly the dollar GDP a mile of infrastructure could pass in a positioning deployment. Demand density is largely a function of population density and household income, and it varies considerably depending on both these metrics.
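
As a rough illustration of the demand-density idea (the formula and numbers below are a simple proxy of my own, not the model I actually use), the value a route-mile “passes” scales with both how many households it passes and what those households can spend.

```python
# Illustrative proxy: economic value passed per route-mile of infrastructure.
def demand_density(households_passed_per_mile, avg_household_income):
    return households_passed_per_mile * avg_household_income   # dollars per route-mile

suburb = demand_density(households_passed_per_mile=120, avg_household_income=90_000)
rural  = demand_density(households_passed_per_mile=8,   avg_household_income=60_000)
print(f"suburban: ${suburb/1e6:.1f}M passed per mile, rural: ${rural/1e6:.2f}M per mile")
```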

Could you trench a mile of fiber to serve one user? Hardly. Even half a mile per user would be an unbearable cost. The baseline strategy for fiber deployment is to “pass” households with baseline infrastructure that can then be connected to those households that elect to purchase your service. If households are dense enough, that works, which means that fiber is really a community strategy, and probably a strategy that requires some population density within the target communities.

If you look at the US market, the top ten metro areas in population each have 5 million or more inhabitants. Dip down to areas with a population greater than 1 million and there are roughly 56. At the half-million-or-more level you have 113, and there are roughly 360 metro areas with a hundred thousand or more people. Only about 1,600 communities have a population greater than ten thousand, of which over a thousand have populations less than 25 thousand. The number of households in any community tends to be roughly 40% of the population.

Population density and household income correlate fairly well with willingness to pay for broadband services, and the household numbers of course correlate well with the cost of providing those services. When we have a concentrated population, a large number of people per square mile, the total revenue potential is higher per unit area than when population density is lower. When we have a large number of residential units per unit area, we have more efficient infrastructure to deliver broadband.

The reason this is important is that most fiber experts I talk with tell me that it is difficult to justify fiber broadband in communities less than ten thousand in population (4000 households), because the cost of deployment can’t be recovered fast enough from broadband subscription revenues. Roughly twenty percent of the US population live in smaller communities or unincorporated rural areas that would be difficult or impossible to serve with fiber except through a level of subsidies unlikely to secure public and political support.

There are alternative strategies to fiber to the home, of course. Fiber could be deployed to a node or to the curb, with another medium then used to haul into each home. That strategy isn’t particularly useful unless you can reduce the cost of the home-haul significantly versus taking fiber the whole distance, of course, and that’s why fixed wireless access (FWA) has gained a lot of traction recently. With FWA you run fiber to an antenna site where wireless, including millimeter wave, can be used to reach homes out to a distance of one or two miles, depending on how fast you want to be and how many obstructions exist. Most operators I talk with will admit that the optimum technology strategy for broadband deployment would be a combination of fiber and FWA.

A square mile is 640 acres. A typical single-family residential subdivision has between four and six households per acre. Condos and clusters roughly double that, and apartment buildings generate ten to twenty times that density. FWA coverage, presuming a one-mile radius, would be a bit over 3 square miles, or roughly 1,900 acres, which would equate to between seventy-five hundred and ten thousand single-family households. The broadband revenue from ten thousand subscribers, assuming an average of $50 per month for broadband, would be $500 thousand per month, from which we could allocate roughly $120 thousand per month to access infrastructure. Across that range of household counts, that’s between $1.1 and $1.45 million per year. The cost of fiber infrastructure to support the population is estimated at between $5.7 and $7.5 million, so the rough payback period is five years, which is reasonable.

However, the cost of FWA for the same community of users is far less, estimated at only a sixth that of fiber, so we could support the high end of our community density with only $1.25 million in deployment costs, and the payback at the same revenue per user is then less than a year. Or, looking at it another way, we could support a population of one sixth of ten thousand, or about 1,600, with the same payback as we’d have with FTTH in a community of ten thousand units. That means we could address an additional ten thousand communities using FWA.
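
Here’s a short sketch that reproduces the payback arithmetic of the last two paragraphs; the 24% share of revenue allocated to access infrastructure is inferred from the $120-thousand-per-month figure, and the rest of the numbers come straight from the text.

```python
# Payback arithmetic from the preceding paragraphs; access_share=0.24 is inferred
# from the $120K/month allocation on $500K/month of revenue.
def payback_years(deploy_cost, subscribers, arpu_monthly=50, access_share=0.24):
    annual_access_revenue = subscribers * arpu_monthly * access_share * 12
    return deploy_cost / annual_access_revenue

fiber_low, fiber_high = 5.7e6, 7.5e6
print("FTTH payback:", round(payback_years(fiber_low, 7_500), 1), "to",
      round(payback_years(fiber_high, 10_000), 1), "years")     # roughly five years

fwa_cost = fiber_high / 6             # FWA estimated at about one sixth of fiber
print("FWA payback :", round(payback_years(fwa_cost, 10_000), 2), "years")   # under a year
print("FWA, 1,600-unit community:", round(payback_years(fwa_cost, 1_600), 1), "years")
```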

When household densities rise to roughly twenty-five per acre we reach a point where multi-story structures are essential to achieve that density, and these reduce the fiber cost while at the same time raising issues with per-household FWA because of interference from structures. These could be mitigated by having a single FWA antenna per building, of course. However, the number of communities that would fit these parameters is limited. Overall, FWA could increase the percentage of population covered by “quality” broadband from roughly 80% of the population to roughly 94%, according to my model. The remainder are likely beyond reach of any broadband technology except mobile cellular 5G and satellite.

I think that the article and study are correct in saying that the cost of supporting the underserved and unserved households with quality fiber broadband would be excessive. I also think that it would be reasonable to believe that some additional households in areas below the target density that can justify fiber could be served via fiber from the node point where FWA towers were fed, so the actual number of households that could be served by fiber would be a bit over the 80% number. My model suggests that this could serve another 4 million households, leaving FWA to support perhaps ten to twelve million who couldn’t be economically served via fiber. I also believe that of the 80% of areas that could support fiber, a quarter would be better served by FWA because ROI would be better and service competition likely higher.

To return to my opening point, there’s surely a tendency to avoid complexity and complex topics, but that can hide important truths. Demand density has been an accurate predictor of broadband deployment economics for at least two decades, but using it would generate a lot of that apparently unwanted complexity. The problem is that it’s simply impossible to ignore economic reality on an issue like this, and there are actually some good answers out there…not perfect ones, or always popular ones, but still better than we have now. I hope we can take the time to consider the questions in the right way, so we can find those answers.

Are We Countering Cloud Skepticism the Wrong Way?

Starting with the tech planning cycle that some enterprises start in late fall, and running into February of 2023, I got for the first time hard indications from enterprises that they were not only putting cloud projects under greater scrutiny, but even cutting back on spending for projects already approved and deployed. Public cloud revenue growth reported by the big providers has slowed, too. I’ve blogged about this, noting that enterprises have realized that 1) they aren’t moving everything, or even actually much of anything, to the cloud, and 2) they’ve been developing cloud applications in such a way as to raise the risk of overspending.

This isn’t to say that the cloud is a universal waste of money. I’ve also noted that there is a real value proposition for public cloud computing, and it’s largely related to applications that have a highly bursty and irregular use of resources. If you self-host these kinds of applications, you have to size your server pool for the peaks, and that can leave a lot of server capacity wasted, capacity that costs you money. If you put the same applications in the cloud, the elasticity inherent in cloud services can let your hosting capacity rise and fall with load.
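
A toy comparison shows why burstiness is the lever; all the numbers below are invented, and real cloud pricing is far more complicated, but the peak-sizing penalty is the point.

```python
# Toy comparison (all numbers invented): self-hosting must be sized for the peak,
# while elastic cloud capacity can follow the load.
hourly_load = [2, 2, 3, 2, 2, 18, 20, 4, 2, 2, 2, 2]    # servers' worth of work per hour
own_server_cost_per_hour     = 0.60   # amortized data-center cost per server-hour
cloud_instance_cost_per_hour = 1.00   # assumed cloud premium per equivalent instance-hour

self_hosted = max(hourly_load) * own_server_cost_per_hour * len(hourly_load)
cloud       = sum(h * cloud_instance_cost_per_hour for h in hourly_load)
print(f"self-hosted (peak-sized): ${self_hosted:.2f}   cloud (elastic): ${cloud:.2f}")
```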

Some cloud proponents don’t think that’s enough. They believe that there’s a kind of intrinsic set of cloud benefits that should be considered even to the extent that they overcome situations where cloud costs are higher than data center hosting rather than lower. One article opens with a statement of this belief: “We thought cloud was all about cost savings, but that metric was wrong. We need to put a value on the agility and innovation cloud brings, not just dollars and cents.” Another asks “Why did you move to the cloud in the first place? Maybe you were thinking there would be cost savings. But even if you were wrong on that point, it’s the agility of the public cloud that has always been its primary value proposition.” Is it true that there’s more to the cloud than savings? What do enterprises themselves think, and of course what do I believe is the case?

Let’s start by exploring another quote from the first: “Cost savings is a ‘hard value’ with easy-to-measure and easy-to-define numbers. Business agility and speed to innovation are ‘soft values,’ which are hard to define and measure.” Implicitly, this means that the cloud offers a faster way for a business to use IT to address problems and opportunities. The theme here is that we’ve tried to justify moving things to the cloud by demonstrating the cloud is cheaper, when we should be looking at other justifications.

One problem is that we’re not, as I said earlier, “moving” things to the cloud as much as we are doing things in the cloud we might otherwise have done within the data center. When you’re looking to cut costs, you actually have to move something to cut them, and I don’t believe that in the main the cloud is really cheaper unless you have that elasticity of load I mentioned. So part of the problem of cloud justification may be linked to an invalid vision of what we’re using the cloud for.

When people use the cloud to augment current applications, which is the dominant “hybrid cloud” model of today, what they’re doing is building an elastic, agile front-end to their applications. In a very real sense, they’re accepting those “soft values” the article talks about. However, that hasn’t meant that they aren’t seeing a need to harden all that softness.

In the last six months, over 80% of enterprises told me that they were “now” requiring some quantification of benefits in order to approve a cloud project. Less than half said they’d required that a year ago. However, even a year ago almost 90% of enterprises said that they justified cloud adoption at least in part by savings versus deploying server/application resources in their own data centers. I could find only 6 enterprises who had ever suggested to me that they believed cloud adoption could be justified by agility and innovation speed alone, and none of my CFO contacts believed that. Since CFOs have to approve a project in most companies, that would suggest that those two values were indeed not accepted in the past. Do we have to assign a specific value, a dollar value, to both? My data suggests that we do. Can we?

We actually have been, and that’s what’s created the cloud-front-end hybrid-cloud model in the first place. Companies have been doing this for roughly four years, and when COVID and WFH hit they accelerated their cloud usage to extend application support to home and mobile workers. The fact is that for at least two years now, the majority of cloud adoption has been driven by a shift of applications toward a web-and-app delivery model. The current trend in the cloud, the trend that led to broader adoption in the last two years, is exploiting the soft-value model, but even that is running out of gas.

We don’t need to validate the soft-value model to gain cloud adoption; we did that already. What we have to do now is to fix the problems with the applications we developed, problems that likely evolved out of the fact that we didn’t impose hard targets for savings and thus overran costs. We’re doing that now, and it’s part of what’s creating the current downturn in cloud growth. The other part, as I’ve suggested, is that the initial soft-value push is running out of gas because the big impetus was WFH. We’re now agile enough for WFH, and WFH is going away.

Here’s a basic truth to consider. Almost all our business applications are designed to support office or “carpeted floor” personnel. Business agility and speed to innovation are more than speed to billing and receiving; in most cases a new product doesn’t really require any significant changes to core business systems. But that’s because what we’re calling “core business” is “core office” and the real business is off our carpeted floors, in warehouses, factories, and other often-dirty places. Truth be told, there’s a massive soft-value proposition, but it involves people we’re not even thinking about when we try to justify cloud spending.

There is about three hundred billion dollars’ worth of productivity enhancements that could be addressed using IT, and as I pointed out in earlier blogs, the majority of this relates to “dirt floor” rather than “carpeted floor” workers, people who are directly involved in the production and movement of goods and the parts that create them. I believe these would be best addressed using a “digital-twin metaverse” model, a model that’s designed to build a virtual representation of complex real-world industrial and transportation systems. These, because they’re really edge applications and often involve movement over some distance, could be the next generation of cloud activity, creating agility where agility is really needed.

Could it be that the very place where cloud features matter the most is the place where we’ve been ignoring using IT to enhance worker productivity? Sure looks like it, and if we want those soft values to matter, we need to face the place where they matter the most.

The Telco-and-Cloud-Provider Partnership: Essential but Tricky

There are surely people out there who continue to believe that network operators, meaning telcos in particular, can catapult to profit growth on the back of traditional voice and connection services. There are also people who believe the earth is flat and that politics is a civilized tension between intelligent debaters. In assessment of profit opportunity, as in politics, it’s all about numbers, and the numbers aren’t on the side of the “people” in my example here. Omdia, who’s done some nice research, put out a report that’s cited (among other places) in TelecomTV. I don’t subscribe to other companies’ research, but the topic here is important and so I’ll comment on the summary material that’s widely available.

The core proposition in the report is one I agree with and I suspect that nearly every serious strategist in the industry would also agree with. Telcos can’t hunker down on basic connection services and hope that somehow they’ll be profitable again. That’s true. My research has shown that in the consumer space, there is no hope of profits from basic services. On the other hand, it also shows that there’s a significant profit to be gained in other service areas that telcos could reasonably hope to address. In fact, my numbers are a bit more optimistic than Omdia’s. Where they expect a bit over a $500 billion opportunity by 2027, my numbers suggest that there is actually almost $700 billion in consumer services and another almost $300 billion in business services to be had.

A second foundation principle of Omdia’s position appears to be that the real opportunities lie so far above the network, above those basic services, that it’s the cloud providers who naturally own them. Telcos should expect to partner with the cloud providers, and accept what another commentary on the report suggests might be only “a small fraction” of the revenue potential, because it would be “better than nothing.”

The specific target areas the report suggests include digital music and streaming video, gaming, and smart-home services. It’s the latter that’s suggested to offer the greatest growth potential, and thus present the best opportunity for these telco/cloud-provider partnerships.

OK, I can buy a lot of this, at least to the extent that I agree there is an opportunity and there is a potential to exploit it via a public cloud relationship. However…I think that both the specifics of the opportunity and the specifics of the partnership would have to be considered, and above all that the mechanics, the technology, used to address the opportunity would be paramount in determining whether there was a useful telco opportunity to be had here.

Let’s look at two hypothetical partnerships, which we’ll call PA and PB. Let’s also say that both attempt to address the same opportunity. In PA, let’s assume we have one partner who has nearly all the assets needed to address the opportunity, and thus could really exploit the opportunity themselves. In the other partnership, PB, let’s assume that there is at least a set of critical assets that still have to be developed, so neither party can really exploit the opportunity with what they have. Which partnership do you think affords a real balance of opportunity among the partners? Let’s then look at the specific target areas the report cites, and see whether they’re PA or PB opportunities.

OK, digital music. How many digital music services do we already have? Answer, too many to allow any to be highly profitable. What’s the key asset needed for the services? Answer, the music. Imagine a telco getting into this. They might try to market the digital music offering of current incumbents, but there is nothing other than sales/marketing they can contribute. Not only that, other types of business would actually have a better shot at sales/marketing in the space. I listen to one digital music source that’s bundled with another service, and I get another one subsidized by my credit card company. This is darn sure a PA partnership in my view; telcos would gain almost nothing from it.

Digital/streaming video is the same, or perhaps even worse. The essential element is the content. There are already many streaming services and they’re raising prices to try to compete, or they’re dependent on ad sales when competitors all have to scramble for the same (or often fewer) ad dollars. Telcos have already tried to market streaming video services with their broadband, and none of these ventures have taken off. Another PA. If there’s a PB lurking here, it lies in somehow personalizing and customizing both services, which could be an AI application.

Gaming? OK, here we have some PA and some PB flavor to our opportunity. On the one hand, gaming is a kind of content just like music and video, and has the same issue set. On the other hand, gaming in its massive multi-player form (MMPG) is dependent on network QoS to ensure a realistic experience. There is something here, a new dimension that not only is open to a telco to address, but that might be easier for the telco to address. Our first PB element! And is gaming a kind of metaverse? PB for sure if it is.

The same can be said for smart-home services. These services depend on devices, which is a kind of “content” and has a definite PA dimension. There is already a set of hosted services available, as OTT elements, to provide access to the devices from a website, app, or both. Another PA. However, there are service QoS dependencies to consider; if you’re away and your home or local Internet is down, you can’t access your devices. There’s also a question of whether a “smart home” is really a special case of IoT, which might mean it’s a “digital-twin metaverse” and a certain PB.

OK, without getting into the details of metaverse and AI technology, how would a telco play in this? There is a general answer to the question of making a partnership with a cloud provider work, and it was provided by a telco. It’s facilitating services. Forget marketing partnerships; telcos are notoriously bad marketers, and they don’t even operate near the top of the food chain in any of these service opportunities. You can’t sell something if you’re layers away from the buyer. Instead, what telcos have to do is facilitate, meaning build some lower-layer element that makes the retail experience better, cheaper, and easier to get into the marketplace.

Facilitation was proposed by AT&T, so it has operator credibility. However, facilitation requires two things operators aren’t exactly naturals at providing. The first thing is a sense of the retail experience being facilitated. You may not have the experience to sell a widget service, but you’d better understand widget-ness well if you expect to facilitate one. The second thing is a realistic model of the layered relationship you’re creating. Something is “facilitating” if it adds value, but it’s a valuable facilitation only if the cost of the facilitating service is reasonable given the retail price at the top, the distribution of contributed value through all the layers, and the cost to another party of replacing your facilitation with their own.

Omdia is right about the need for telcos to partner with OTTs, but I think it’s critical for telcos to frame the partnership so it doesn’t become parasitism instead. Connection services and telcos are increasingly disintermediated from demand sources overall, a condition created because what connections are used for today is the delivery of experiences. It would be very easy, and very destructive to telco interests, for a partnership with public cloud providers to become a conduit for even more declines in telco revenue per bit, and if that continues then the foundation of modern telecom could be threatened, and some form of subsidization would be inevitable.

Is the Tech Dump the New Norm?

There doesn’t seem to be much good news for tech companies these days. The fact that PC sales are expected to have fallen sharply in the first quarter, with Apple estimated to have lost 40%, sure seems ominous. Is all of tech going to be under pressure? What’s behind this, and when will it end? Those are hard questions to answer, as we’ll see, but we’ll still try.

There are fundamental issues with tech, of course. The sudden inflation we saw last year resulted in a global trend for central banks to raise interest rates. That tends to impact tech companies because many of them borrow significantly to finance growth. There were also supply chain problems that resulted in backlogs of orders, and that obviously delays revenues. Higher interest rates and inflation also hit developing countries particularly hard because their currency weakens against the US dollar at the same time that inflation drives up prices.

Consumers are obviously pressured by all of this, and that contributes to a reduction in consumer spending on tech. Businesses are pressured because spending pressure equates to profit pressure on them, and that puts their stock price at risk. I believe that a lot of the Tech Dump of 2023, as one vendor friend of mine described it, can be traced back to the issue of stock prices.

Stock prices generally (keep that qualifier in mind, please!) track earnings, which is roughly revenues minus costs. If you want earnings to go up while revenues are going down, then costs have to go down even more. We’ve heard about the tech layoffs, and that’s one aspect of cost reduction. Another, of course, is reductions in other spending, including spending on capital equipment. Since the stuff that’s Company A’s capital purchasing is Company B’s sales, you can see how this can create one of those feedback loops.

The potential for a kind of destructive negative feedback in spending and cost cutting is exacerbated by short-selling hedge funds. Short sales of stocks, unlike traditional “investments” or “long” purchases, are designed to profit if the market goes down, but they can also force the markets down like any wave of selling. For the first time in my memory, we’re seeing investment companies actively promoting themselves as short sellers, and issuing reports to call out short-sale targets. The effect of this is to magnify any bad news, and I believe that much and perhaps most of the stock dump we’ve seen over the last year was created and sustained by short selling. When a stock goes down, whatever the reason, companies try to take cost management steps to boost the price again, and that often means cutting spending and staff.

Even the expense side of the business spending picture can be impacted. One good example is spending on the cloud, which recent reports show has declined at least in its rate of growth. On the fundamentals side, cloud spending is linked to business activity, more so than capital spending on gear, so it responds quickly to a slowing of activity. On the technical side, many companies have realized that they built cloud applications the wrong way and are paying more for cloud services than they’d need to. Thus, they can cut back to reduce costs and help sustain their stock price.

What this all means is that there is a mixture of reasons why tech spending has fallen, and some of the big reasons have little to do with the market appetite for tech products and services. The good news is that these non-demand reasons for spending pressure are relieved when the current economic angst passes. Since January, my model has consistently said that will be happening in May, and I think current financial news is consistent with that prediction. Many reports now say that the worst of inflation has passed, and that the Fed and other central banks are nearly at the end of their rate hikes. Nobody expects prices to go down much (if at all), but both consumers and businesses tend to react more to negative changes than to a steady state that’s worse than before.

There are also segments of the market that seem less impacted by these non-demand forces, and those segments have already outperformed tech in general. The most-impacted sectors of tech are the sectors that rely on direct consumer purchases. Credit-card interest has been rising and inflation has increased prices, thus reducing disposable income and making consumers more concerned about their budgets. Next on the list of impacted sectors are those that support the consumer sectors. The least-impacted sectors are those that invest on a long depreciation cycle, such as network operators and those that supply products that are typically deployed based on long-standing evolutionary planning, like data center elements.

This explains why Apple suffered more from the downturn than, say, Cisco. Apple sells primarily to consumers, and Cisco sells to businesses to support capital plans that look ahead for half a decade. Perturbations in the market will obviously have less effect on the latter than on the former, which suggests that companies like Cisco will likely see less impact on revenues in their next earnings report.

All in all, I think the Tech Dump of 2023 (or of 2022 into 2023) will end, but that doesn’t mean that tech won’t still have issues, both in the short and long term. In the short term, it will take time for spending that was reduced or deferred to return, because it will take time for inflation and interest rate changes to percolate through the economy and impact stock prices. Nobody is going to push up spending till their stock recovers, which probably means until their revenue recovers. That means the same negative feedback that drove the dump will also delay full recovery.

In the long term, tech is likely to remain a target for short-sellers because tech stocks tend to price in a presumption of growth that’s often a bit optimistic. That makes it easier for short-sellers to start a run on a stock. There are also fundamentals issues; we continue to over-hype new technologies, and thus not only overvalue companies but also risk under-investing in things that could really be important, even critical, to the tech markets in the longer term. In the end, though, the markets are driven by companies who have significant earnings growth potential, and it’s hard to see that outside the tech space. So…tech may be down but it’s not out.

How Far Might Open Fiber Access Take Us?

There is no question that the most expensive part of networking is the access piece. Move inward and you quickly reach a point where hundreds of thousands of customers can be aggregated onto a single set of resources, but out in access-land it’s literally every home or business for itself. Not only that, selling a given user broadband access means having the connection available to exploit quickly, which means building out infrastructure to “pass” potential customers.

The hottest residential broadband concept in the US is fixed wireless access (FWA), which addresses the access cost problem by using RF to the user, and with at least optional self-installation. In mobile services, operators have long shared tower real estate and in some cases even backhaul. Now, we’re seeing growth in “open” fiber access, where a fiber player deploys infrastructure that’s then shared by multiple operators. Is this a concept that could solve the demand density problem for fiber? Is it the way of the future? Let’s take a look.

In traditional fiber networks, and in fact in “traditional” access networks overall, it’s customary for the service provider to deploy the access infrastructure exclusively for its own use. That means that every competitor has to build out like it was a green field, despite the fact that there may already be competitors in the same area who have already done so. Open fiber access changes this to something more like the shared-tower model of mobile services; a fiber provider deploys infrastructure and then leases access to multiple service providers. This doesn’t necessarily eliminate competitive overbuild, but it reduces the chance it would be necessary. It also reduces the barrier to market entry for new providers who perhaps have special relationships with a set of broadband prospects that could be exploited.

At this high level, open fiber access seems like a smart idea, and it might very well be just that. It might also be a total dead-end model, and it might even be a great idea in some places and a non-starter in others. The devil here is in the details, not only of the offering but of the geography.

If a given geography already has infrastructure that can support high-speed, high-quality broadband (which I’ll define as being at least 200/50 Mbps) then the value of open fiber access is limited because either fiber or a suitable alternative is already in place. The open fiber then becomes a competitor who’s overbuilding, which sort of defeats the reduce-overbuild argument’s value.

If there is no quality broadband option in an area, the question becomes one of demand density. A small community of twenty thousand, with perhaps five thousand households and several hundred business sites, might well not have a current quality broadband provider. A county that’s spread over a thousand square miles might have the same population, and also not have a quality provider. The first of our two population targets would likely be a viable opportunity providing that household income was high enough to make the service profitable, but the second target would simply require too much “pass cost” to even offer service, and the per-customer cost of fiber would be very high because of the separation of users.

OK, suppose that we are targeting that first community of under-served users. The next question is whether we can support the same community with our base-level 200/50 broadband using another technology, like FWA. In many cases that would be possible; what’s important is that the community be fairly concentrated (within a radius of perhaps two miles max) and that a small number (one or two) node locations with towers could achieve line-of-sight to all the target users. If FWA works it’s almost surely going to be cheaper than open fiber access, which means the latter would have a retail service cost disadvantage out of the box.

But here we also have to consider demand density, the economic value per mile of infrastructure based on available users and price tolerance. If demand density is high enough, then an alternative broadband option could still be profitable. Most of the areas where FWA is being deployed are already served by technologies like CATV, and where demand density is high enough it’s still profitable to deploy CATV. If you could reduce the cost of fiber through “federating” access across multiple operators, the net pass cost could be low enough to put fiber to the home (or curb) in the game.

The home/curb point is our next consideration. A positioning fiber deployment would “pass” homes, meaning it would make a node available at the curb, from which you could make connections. You still have to bring broadband into a home/business to sell broadband, and obviously you can’t do that until you actually sell a customer service. When you do, how does the cost of that drop get paid? Does the service provider who did the deal pay it, or does the open fiber access operator do the job? If it’s the former, how do you account for the cost if the customer changes providers? What if the first provider elects to use a limited-capacity path from node to home? If it’s the latter, the open fiber access provider has to bear what’s essentially installation costs. They also decide what the feed technology will be, which probably means it would be very high in capacity, as in fiber. That raises overall service costs, perhaps higher than some service providers targeting more budget-conscious users would accept.

Then there’s the question of whether there’s a potential link between open fiber access and “government” fiber. Any level of government could decide to deploy fiber and make it available to broadband service providers. That’s already being done in a few places, and it might eventually open up whole communities with marginal demand density to high-speed fiber broadband availability.

All of these questions pale into insignificance when you consider the last one, which is “Why would an open fiber access provider not become a broadband service provider?” Access is the biggest cost of broadband, overall. You could deploy and share at a wholesale rate, or you could deploy and keep all the money. What’s the smart choice? Unless you have a target geography that for some reason is easiest to address via a bunch of specialized providers, each with their own current customer relationship to exploit, keeping all the money seems the best option. Even if it’s not the first option taken, does the open fiber access provider’s potential entry into the retail service market hang over every wholesale relationship? Eat thy customer may not be a good starting adage to live by, but as new revenue opportunities disappear, the old rules of the food chain fall by the wayside.

There’s a corollary question too, which is “Haven’t we invented the CLEC model again?” I noted in an earlier blog that requiring big cloud providers to wholesale capacity would create the same sort of model regulators created when they broke up the Bell System and required local operators to wholesale access. That was supposed to be a competitive revolution, but all it really did was create an arbitrage model instead of a true facility-based deployment model. That could happen here too, but so far we are seeing the open fiber access model bring fiber to places where it otherwise might not be deployed, and that’s a good thing.

Is “Cloud Dominance” the Same as Cloud Monopoly?

Is “dominance” the same as “monopolistic”? That’s a question that many regulators are wrestling with, and sometimes they’re also wrestling with lobbying and nationalism. In the UK, Ofcom (the UK regulator) has been looking at public cloud services, and recently referred Amazon and Microsoft to the UK’s investigatory body (the UK equivalent of the US FTC). One question this raises is whether the public cloud market, which depends on economy of scale, can be efficient with anything other than a small number of competitors. Another question is how a competitive market in the space could be promoted.

Cloud services are very much like network services in that they depend on earning revenue from an “infrastructure” that has to be deployed and operating before even selling services is possible. Telcos back fifty years ago used to draw a graph to illustrate this challenge. On Day One, the operator starts deploying infrastructure, and that process continues for a period of time before credible scope is reached. During that period, the graph falls negative because cash flow is negative. At some point, service sales kick in and investment falls off, and the curve flattens and then ascends, finally crossing the zero axis that represents neutral cash flow. It then continues into the profit phase. The negative part of the chart is called “first cost”, the outflow needed until the service begins to pay back in the net.
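
For illustration, here’s a small sketch of that curve with made-up figures; “first cost” is the depth of the dip before cumulative cash flow crosses zero.

```python
# Sketch of the "first cost" curve: cumulative cash flow dips during build-out,
# then climbs as service revenue ramps. All figures are illustrative.
def cumulative_cash_flow(quarters, build_cost_per_q, revenue_ramp_per_q, build_quarters):
    total, curve = 0.0, []
    for q in range(quarters):
        spend = build_cost_per_q if q < build_quarters else build_cost_per_q * 0.2
        revenue = max(0, q - build_quarters + 1) * revenue_ramp_per_q
        total += revenue - spend
        curve.append(round(total, 1))
    return curve

curve = cumulative_cash_flow(quarters=12, build_cost_per_q=10.0,
                             revenue_ramp_per_q=4.0, build_quarters=4)
first_cost = -min(curve)                                   # depth of the negative dip
breakeven = next(q for q, v in enumerate(curve) if v >= 0)
print("cash-flow curve:", curve)
print("first cost:", first_cost, " breakeven quarter:", breakeven)
```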

In the public cloud space, the “first cost” problem is exacerbated when there are established competitors in place while a new one is trying to gain traction. These competitors, having established customers and having paid their first costs already, are in a stronger position. They also have more infrastructure in most cases, which means at least somewhat better overall economy of scale. They have real estate and facilities established where markets demand they be located for efficient operation, too, and they understand both the operation of their gear and the marketing and sale of their services.

The “secondary” cloud providers that the referenced article cites are all players who had a viable non-cloud business and hoped to exploit their customer base with cloud services, which facilitated their entry into the market. However, none of these secondary players has really gained much market share. Amazon, Microsoft, and Google dominate the cloud.

Many market theorists would say that’s OK. The “optimum” number of competitors in a space has been calculated in many ways by many researchers, and the result usually turns out to be “Three!” Well, that’s how many major cloud players we have. So what are Ofcom and the UK griping about? I think there’s a touch of nationalism involved, which isn’t anything heinous, just perhaps unrealistic. The US tends to be a more risk-tolerant market for startup investments and also for the expansion of established players into new businesses. All three of our cloud giants exploited their own need for distributed hosting resources to get started in the public cloud space. It’s inevitable that where taking risks is better tolerated, more risks pay off.

The UK can’t turn back the clock. Could regulators there decide to somehow move against Amazon, Google, and Microsoft, or the biggest two of the three, which is what the current initiative seems to contemplate? Do they threaten to stop the big cloud providers from operating in their country? Do they force them to wholesale their services, or constrain the number of customers they sell to, or the number of new customers they accept? The challenge here is that any of these measures would almost surely fail, and would also almost surely hurt cloud users. Would they then encourage new players to step in? What happens when those players become “dominant”?

It seems to me that the smart move here would be to hearken back to the telco days of old. We had a unified set of telco services worldwide, with many different “competitors” in play. How did that come about, and why did it seem to work? I think the answer lies in three words: “Interconnect”, “Federation”, and “Settlement.”

Regulators required that telcos “interconnect” their networks to provide services across them all. In many jurisdictions, they also required telcos to “federate”, meaning to offer their services to competitors who wanted a single point of contact but needed service coverage where no single provider could offer it. Finally, they required “settlement” among providers for services where one operator collected for a pan-provider service, so the other operator was compensated for the resources they contributed.
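As a hedged illustration of the settlement piece, here’s a small Python sketch of one way the split might be computed; the billing fee and contribution shares are assumptions made up for the example, not any regulator’s actual formula.

    # One provider bills the customer for a pan-provider service, keeps a
    # retailing/billing fee, and settles the rest in proportion to the
    # resources each provider contributed. Figures are illustrative only.
    def settle(retail_revenue, contributions, billing_provider, billing_fee=0.10):
        pool = retail_revenue * (1.0 - billing_fee)
        payouts = {p: pool * share for p, share in contributions.items()}
        payouts[billing_provider] = payouts.get(billing_provider, 0.0) \
            + retail_revenue * billing_fee
        return payouts

    print(settle(100.0,
                 {"ProviderA": 0.5, "ProviderB": 0.3, "ProviderC": 0.2},
                 billing_provider="ProviderA"))
    # ProviderA: 55.0 (45.0 share + 10.0 billing fee), ProviderB: 27.0, ProviderC: 18.0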

Could cloud providers have these same three requirements imposed? In theory, yes, though the legal basis for it might vary from country to country and in some cases might be linked to a consent decree imposed by the authority assigned to regulate monopolistic practices. In practice, the process could be a game-changer worldwide because it might eliminate a lot of problems users face today, like the challenges of supporting multi-cloud.

Could this help “competition”? Almost certainly, if one were to define “competition” as the entry of new giant competitors (like telcos and cable companies) who’ve stayed out of the public cloud services market up to now. All the second-tier cloud providers, as I’ve noted above, jumped into the cloud space by exploiting incumbent relationships and products. That means that it’s possible to start a specialized, targeted cloud business. The problem is that you can’t achieve full geographic scale. Suppose the big guys had to offer you services at a wholesale rate, which was retail less avoidable sales/marketing. That would allow smaller providers to leverage the resources of the major players to build credibility. They could still eventually offer their own resources in the areas where they’d initially wholesaled, but only when the opportunities moved them far enough along that first-cost “S” curve.
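As a trivially small worked example of that wholesale formula, with purely illustrative numbers:

    retail_price = 100.00               # what the major provider charges end users
    avoidable_sales_marketing = 18.00   # assumed sales/marketing cost the wholesaler avoids
    wholesale_price = retail_price - avoidable_sales_marketing   # 82.00
    reseller_margin = retail_price - wholesale_price             # 18.00, room for the reseller's own S&M
    print(wholesale_price, reseller_margin)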

To make this work, though, you’d need to define a set of services that were subject to federated wholesale and standardize interfaces for them. Do regulators have the skill to do that, or even the authority? Perhaps most important, do they have the time? We can’t enter into another of those years-long “Where does your lap go when you stand up?” study adventures all too common in the standards world. If we could get through the process quickly, meaning if the UK and perhaps the EU are really prepared to push, we could in fact add competitors to public clouds. Would that help anything other than nationalistic pride? Not according to the data that says three competitors is optimum, and we already have that many majors, plus a tier of smaller players. And for all the nationalistic pride at stake here, there’s no indication that new players are planning the major cloud investments needed to enter the competitive fray.

That still leaves our first question, though. If enterprise cloud users have the option of using a set of “federated cloud” services brokered by a smaller player, versus a single unified service of a major player, would they pick the former? Not according to what enterprises tell me. These days in particular, with global economic stresses impacting almost every industry, enterprises want cloud providers that have the financial mass and technical credibility needed to stay the course. And would the smaller players, even if they then tried to selectively build out their own resources where wholesaling indicated opportunities existed, ever achieve reasonable economies in real estate, capital equipment, and operations?

Remember the CLEC craze of the late 1990s, when regulators mandated that telcos share access assets with others? I firmly believe that the requirement reduced and delayed competition in the access space, creating in its place a kind of retail arbitrage of wholesale relationships. It wasted a decade. We could do the same thing in the cloud, and waste another decade.

How Many of Those Metaverse Things Do We Have, Anyway?

OK, I guess it’s time to ask (and of course, try to answer) the question “How many metaverses are there, anyway?” It’s clear when you read about the metaverse concept, and also watch a growing number of metaverse-related commercials, that the answer is greater than one. How much greater, and what’s creating both the diversity of metaverses and the confusion over how many there are? Let’s get to the answer part and see what emerges.

The “original” metaverse was the invention of Meta, but only in the sense that Meta created an application for virtual reality technology that had been around for quite a while, particularly in gaming. What Meta envisioned was a virtual-reality successor to Facebook, a virtual world that was even more immersive, even more entertaining, and even more profitable. This metaverse, which I’ve called the “social metaverse,” is still IMHO what Meta sees as the goal, but the technology it would depend on isn’t being revealed in detail.

We know from gaming that it’s possible to create a virtual world that can be represented in VR glasses. We know that in this virtual world, a player has a “character” or avatar, and that the behavior of the avatar is at least in part responsible for what the player sees. The player is represented, in short, by the avatar. We also know that multi-player games allow for multiple players with their own avatars, and that means that what each player sees is dependent not only on the behavior of their own avatar, but that of other players. Meta’s social metaverse, then, is an extension and expansion of this basic model, and that has both a business and technical challenge.

The business challenge is getting massive-scale buy-in by the same people that made Facebook a success. Early experiences have been disappointing to many, probably most, because they lack the kind of realism that any virtual world has to offer to be comfortable. Gaming, you may realize, is enormously resource-intensive at the GPU level, to the point where some advanced games can’t be played on anything but the most modern and powerful systems. You cannot have that sort of power as a condition of metaverse adoption or you’ll disqualify most users and all mobile users, yet without it you face the lack-of-realism problem.

That’s part of the technical problem, but not all of it. The other part is that a broad community like a social metaverse will likely have to present users with a virtual world that has to be synchronized to the behavior of users who are literally scattered to the ends of the earth. How is the composite view of a specific area of the metaverse, what I’ve been calling a “locale,” constructed, given the differences in latency between that place and the location of each of the users? This challenge means that the “first cost” of metaverse deployment would likely have to be quite high even when the number of users was low.
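To make that concrete, here’s a toy Python sketch of a latency-aware locale. It’s my own construction, not Meta’s (or anyone’s) actual design, and it assumes clients have synchronized clocks; each avatar reports a timestamped position and velocity, and because each user’s latency differs, the locale extrapolates every avatar to a single common view time before composing what any user sees.

    import time
    from dataclasses import dataclass

    @dataclass
    class AvatarState:
        x: float
        y: float
        vx: float
        vy: float
        reported_at: float   # sender's timestamp, assumed clock-synchronized

    class Locale:
        def __init__(self):
            self.avatars = {}

        def update(self, avatar_id, state):
            # Keep only the newest report for each avatar.
            current = self.avatars.get(avatar_id)
            if current is None or state.reported_at > current.reported_at:
                self.avatars[avatar_id] = state

        def compose_view(self, view_time=None):
            """Extrapolate every avatar to one common instant."""
            view_time = view_time or time.time()
            view = {}
            for avatar_id, s in self.avatars.items():
                dt = view_time - s.reported_at   # absorbs each user's latency
                view[avatar_id] = (s.x + s.vx * dt, s.y + s.vy * dt)
            return view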

Meta seems to have caught on to this, and its recent commercials have been emphasizing what’s almost a metaverse of one, or at least one of “not-very-many”. That takes us out of the realm of the social metaverse into what could be called an “educational” or “limited” metaverse. School children interact with woolly mammoths, doctors visualize a patient’s innards, and so forth. These applications are much easier to coordinate and implement, one reason being that you could assume users were co-located and even that there might be some central “locale” processor that would do a lot of the heavy lifting on visualization, allowing client devices to be simpler. This is our “second metaverse”.

In parallel with this, we have our third metaverse, emerging from a totally different mission and set of backers. The “industrial metaverse” is something that’s intended not to create a virtual world but an accurate model of a part of the real world. In the industrial metaverse, the elements are not a mixture of a computer-generated place or places in which some user-controlled avatars interact, but rather a “digital twin”, a representation of real things. That elevates the question of how those things are synchronized to what they represent in the real world. I’ve had a number of conversations with vendors on the industrial metaverse, and a few with enterprises who realize that their “IoT applications” are necessarily creeping into the industrial metaverse space.
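Here’s a minimal digital-twin sketch to make that synchronization question concrete; the asset name, attributes, and staleness threshold are assumptions for illustration, not any vendor’s industrial-metaverse API.

    import time

    class DigitalTwin:
        def __init__(self, asset_id, max_staleness_s=5.0):
            self.asset_id = asset_id
            self.state = {}           # e.g. {"temperature": 71.2, "rpm": 1450}
            self.last_update = {}     # attribute -> timestamp of last telemetry
            self.max_staleness_s = max_staleness_s

        def ingest(self, attribute, value, timestamp=None):
            """Apply a telemetry reading from the real-world asset."""
            self.last_update[attribute] = timestamp or time.time()
            self.state[attribute] = value

        def is_synchronized(self, now=None):
            """Is the twin still a trustworthy mirror of the real thing?"""
            now = now or time.time()
            return all(now - t <= self.max_staleness_s
                       for t in self.last_update.values())

    twin = DigitalTwin("pump-07")
    twin.ingest("temperature", 71.2)
    twin.ingest("rpm", 1450)
    print(twin.is_synchronized())   # True until readings go stale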

All of these metaverses have two common elements, the synchronization piece and the visualization piece. Sometimes visualization means exactly what the term usually connotes, as it would in the first two metaverses, and sometimes it means “exploiting” or “realizing” the model of the virtual world in some other way, like controlling a process or moving something rather than knowing it moved. Sometimes synchronization means modeling relatively simple behavioral movements of many users or a few users, and sometimes it means taking a complex manufacturing process and making a computer model of it. It’s been my view that this commonality means that we could consider all metaverses to be products of a common architecture, perhaps even of a common toolkit.
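If that’s right, the common toolkit might boil down to a contract like the Python sketch below. The class and method names are my assumptions, not an existing standard, but the same skeleton would serve a social, limited, or industrial metaverse equally well.

    from abc import ABC, abstractmethod

    class Synchronizer(ABC):
        @abstractmethod
        def apply_event(self, model, event):
            """Fold a user action or real-world telemetry event into the shared model."""

    class Realizer(ABC):
        @abstractmethod
        def realize(self, model):
            """Render the model (a VR view) or act on it (process control, alerts)."""

    class Metaverse:
        """One skeleton, whatever the flavor: social, limited, or industrial."""
        def __init__(self, synchronizer, realizers):
            self.model = {}
            self.synchronizer = synchronizer
            self.realizers = realizers

        def step(self, events):
            for event in events:
                self.synchronizer.apply_event(self.model, event)
            for realizer in self.realizers:
                realizer.realize(self.model)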

This is the thing I’m afraid we’re missing, the thing I think hurts us most if we do miss it. Three models, how many vendors and approaches, and where do we have a real, provable, opportunity? Talk about silo problems; metaverse could generate them in spades. And that doesn’t even consider what a unified metaverse might be essential for.

Do you like the idea of smart cities? Without a metaverse concept behind them, what we have is a bunch of autonomous cells and not a brain. Do you believe in robotics on a large scale? How will robots behave in a real world that can’t be represented in a way AI can interpret? Speech to text, text to speech, chatbots, image processing, and a lot of other leading-edge stuff can only progress so far without a means of using each technology within the context of a real-world model. I think we’re focusing too much on the virtual-reality part of the metaverse concept. Even Meta, trying to bring its own concept forward, is still obsessed with the metaverse as a virtual world for humans to inhabit. It is a virtual world, but the best and most valuable metaverse applications may not have any humans in them, and may not even require visualization.