On the Vanity of Tech Greatness

With apologies to Arthur Guiterman.

“The Elon Musk whose Twitter Flaw launched Mastodon…has rubbed us raw.

The metaverse that promised much…has crumbled into cartoon dust.

The 5G wave that swept our shores…became the focus of our snores.

All tech seems locked in hype and fear…and so we hope for a better year.”

Some New Podcast Capabilities for 2023

Starting in 2023 I’ll be adding some capabilities to our TMT Advisor podcast site, including scheduling Mastodon-based interaction on selected podcasts and adding a new podcast type. Please note that you’ll have to be registered to use the Mastodon capability and to listen to the new TechsRay™ podcasts. Those who are registered on January 10th will be able to participate in the test of the new capabilities, but no new subscribers will be added until the test is complete, at which time I’ll catch up on the registrations of new subscribers.

Goodbye 2022, Hello 2023, and Beyond

We’re almost done with 2022, a year that a lot of people aren’t going to be sad to leave behind. The big question for us all is whether 2023 will be any better, and there are signs that point both upward and downward. The tech world is always a slave to the broad economy, but the two don’t move in lockstep so we have to look a bit deeper at the trend lines. To do that, I’ll draw on what I’ve learned from both buyers and vendors, and also on Street research.

Keep in mind that both these sources have potential issues. I don’t necessarily see a valid statistical sampling of viewpoints, because my contacts are people who are likely already following my views. Whether I influence them, or simply “pre-select” from the broad market because people who agree with my views are more likely to contribute to my information, there’s a selection bias at play. With regard to the Street, I’ve noted many times that there are things they’re good at (like gauging whether budgets are going up or down) and things they’re terrible at (like almost everything relating to technology insights). With that in mind…

If we start with the enterprise, we see a convergence of views from both sources. My contacts tell me that budgets for 2023 are not under pressure, but it is possible that some projects will be pushed a bit toward the second half because of broader economic uncertainty. The Street view with regard to budgets is similar, but I don’t see a significant consensus on the project timing issue.

What is also consistent is the general focus of projects. There’s a lot of interest in vendor consolidation, which I see primarily at the hardware and platform software level but which the Street sees as a broader trend. The issue here is that enterprises think that integration of any sort has become a major problem, and they’re looking to reduce it by reducing the population of things that need to be integrated.

One insight I got from my contacts that the Street didn’t mention is that there’s a movement toward “full-stack” or suite vendors and away from the “best-of-breed” thinking. I think this will favor vendors like IBM/Red Hat, VMware, Dell, and HPE. All these players offer both hardware and platform software, and that’s looking increasingly attractive to enterprises. Part of the reason is that enterprises are reporting rapidly mounting difficulties in acquiring and retaining highly qualified technical personnel.

The highest priority for enterprises is the applications, though, not what they run on. There is still significant interest in using the cloud to improve engagement with customers and prospects, and also (increasingly) to support employees and suppliers. There’s also growing interest in using SaaS for horizontal applications, and in fact SaaS use is the largest reason for “multi-cloud”.

Things on the network side are significantly more muted. There is no budget increase on tap for network infrastructure among enterprises, only for security products. This is another area where my data matches that of Wall Street, and it’s hardly surprising, given that the role of the network isn’t changing and so major transformational investment would be hard to get past the CFO.

“Transformational investment” is a worthy topic in itself. My data has consistently shown that in both networking and IT, there’s less interest among enterprises in trying out new stuff than we’ve seen for at least a decade (in fact, since the downturn in 2008). New technologies, new vendors, new anything, is kind of out of favor, even with CIOs who are usually more interested in that sort of thing than the rest of the C-suite. The Street has generally picked up on this point, too.

What the Street doesn’t validate specifically is the attitude enterprises have toward even identifying new technologies, and perhaps surprisingly, toward discovering new paradigms for the way that IT is applied to their businesses. “Stay the course” is the mindset, and part of the reason I’ve heard for this is that buyers have lost faith in many aspects of tech. The number who say that they trust the representations made by their current or prospective vendors has dropped by a third in the last five years. The number who trust media and advertising has been cut in half over the same period.

In the network operator space, things are way more complicated in some ways, and much simpler in others. Complicated because there are many different types of operators, and many different sizes. Complicated because an operator organization is like a government; huge, monolithic in some ways, and yet home to radically divided opinions. I described the latter phenomenon as the young Turks versus the entrenched conservative management establishment, but in fact you could divide operator organizations up in three or four different ways, which shows just how hard it is to establish a perspective on “plans”.

Let’s cut through that, though, by admitting that senior management falls into the “entrenched and conservative” part of any division, and that this group is still driving the bus. If we do that, we can see some surprising congruencies with enterprise management.

The “stay the course” mindset is one. It may be that management thinks that all the current tech market confusion is something that will pass away without any need for them to act in any particular way. It may be that they don’t think that they can sit on their hands, but don’t know what to do when they stand up. Understandable, when you have asset useful-life expectations that average 50% to 100% longer than you’d see for the same assets in an enterprise deployment.

This attitude is prevalent in both vendor and technology selection, but in both areas there’s a bit more of a split view among operators. There are some who are eager to change both vendors and technologies, in order to better manage costs, but for most this attitude is confined to areas of the network (geographic or service segments) that are largely greenfield or are undergoing major modernization. 5G is a good example.

Cost management, in fact, is the only consistent driver I see among operators I’ve chatted with. There is really little interest in “new services” beyond making some largely inconsequential tweaks to existing services and calling the result “new”. That’s focused operators more and more on the cost side, but at the same time ensured a collision between their goal (reduce costs) and their technology intransigence (don’t change vendors or technologies radically). Part of that, of course, can be attributed to the long capital cycles operators have to deal with, but part is just the traditionalist bias of senior management.

There are some things we can expect to see changing in 2023, though in a more evolutionary way. For consumer services, we’re recognizing that faster/better Internet sells, even if the claims are vague. As a result, there’s interest in pushing fiber broadband further, even if it means that there are little broadband enclaves where demand density is high, surrounded by areas where broadband is relatively poor. There’s also rapidly growing interest in using 5G in general, and millimeter wave in particular, as an alternative to fiber/wireline.

Consumer broadband, the Internet, and the cloud, are now linked in a powerful symbiosis. Over the last two years, we’ve seen the value of engaging users online to promote, sell, and support products. That has led to a demand for richer online experiences, which has led to both a need for cloud-hosted front-end elements to applications and better broadband to deliver richer visualizations. As a result of this, business networking is being transformed, in a multi-dimensional sort of way.

One obvious dimension is that better Internet and consumer broadband will make better low-cost connectivity available in more and more areas where there’s a population concentration that raises demand density. Since those are the same areas where business sites are likely to be located, it’s pulling businesses toward Internet exploitation rather than specialized business Ethernet/fiber access and MPLS VPNs. Major sites aren’t likely to end up on consumer-driven broadband infrastructure, but any smaller sites will be better supported there, and so the migration of small sites toward consumer broadband is almost assured.

The other dimension is the cloud/Internet role in supporting application front-ends. If customers, partners, and (increasingly) workers are all supported via the Internet and the cloud, then “cloud networking”, or the connection of data center sites with the cloud for hybridization, becomes the primary incremental enterprise network requirement. Even alternative VPN technologies like SD-WAN are increasingly likely to be deployed as virtual-network SASE instances in the cloud rather than as branch appliances or services. If workers access their business applications through the cloud, then all you need in branch locations is an Internet connection and security tools.

All this adds up to the need to create a unified virtual-network model for enterprises. I think that the announcement made today by Red Hat and Juniper is an example of that; Juniper has both SD-WAN and general virtual-network technology (from their acquisition of 128 Technology and via Contrail, respectively), and the paper posted today is an example of how we might see networking transformed by the combination of the cloud, the Internet, and even consumer broadband.

Security is the final point here. We have been practicing incrementalism with respect to security, like a crowd of children with fingers at the ready to stick in holes in our dikes. It’s time we fix the leaks holistically, but that’s been difficult to do given that the evolution of the cloud front-end hybrid model has created new attack surfaces, that network connection options are evolving, and that vendors tend to introduce band-aid fixes rather than asking buyers to start their security considerations over again.

Years ago, I pointed out that security really has to start with session awareness, meaning that network services that can identify a specific user-to-application relationship and match it against what is permitted and what isn’t, should form the foundation of a security model. We are starting to see that, and starting to understand that some form of virtual networking, which elevates connection management above transport connectivity, is essential. For business services, then, virtual networking is the most important thing to consider in 2023.
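
To make “session awareness” concrete, here’s a minimal Python sketch of the core check such a service would perform: admit a flow only when the authenticated user-to-application relationship appears in an explicit permission table. The names and the policy table are illustrative assumptions, not any vendor’s actual implementation.

```python
# Minimal sketch of session-aware access control: a connection is admitted
# only when the (user, application) relationship is explicitly permitted.
# The identities and the policy table below are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Session:
    user: str          # authenticated identity, not just an IP address
    application: str   # the service the user is trying to reach

# Explicit allow-list: any relationship not listed is denied by default.
PERMITTED = {
    ("alice", "crm-frontend"),
    ("alice", "payroll"),
    ("bob", "crm-frontend"),
}

def admit(session: Session) -> bool:
    """Admit a flow only when this user-to-application pair is permitted."""
    return (session.user, session.application) in PERMITTED

# Example: bob may reach the CRM front-end, but his payroll session is refused.
assert admit(Session("bob", "crm-frontend"))
assert not admit(Session("bob", "payroll"))
```

The design point worth noting is default-deny: connectivity becomes a consequence of a permitted relationship rather than a property of the transport network, which is exactly what elevating connection management above transport connectivity implies.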

We’re not going to see a network revolution, or a tech revolution overall, in 2023. Economic concerns and central bank tightening will ensure that growth questions will haunt some economic sectors through the year, and that will make both businesses and consumers a bit more cautious. What we will see, I think, is a number of evolutionary changes in our tech ecosystem converging to create some trends that are a bit more visible, and thus perhaps more addressable.

I’ve tried this year to present an objective view of tech, offending some along the way and perhaps exciting others a bit. I’m pledged to do the same thing in 2023. This is my last blog of 2022; I’ll resume after the first of the year. Meanwhile, have a Happy Holiday season!

Oracle’s Cloud is Hot; Can They Keep It Hot?

If you’re a fan of the cloud, which probably just about everyone is, then one big question that you’d probably like to have addressed is “Why, given that all the signs are that public cloud spending growth is dropping, did Oracle deliver 43% higher cloud revenue in the last quarter?” There have been a lot of comments made on this, mostly the kind of abstract junk we’re all used to hearing. Let’s see if we can offer something actually useful, maybe even insightful, on the topic.

First and foremost, Oracle has the advantage of starting small. A deal that would give Oracle a five or ten percent total revenue hike in one shot would be a pimple on the cloud revenues of the Big Three providers. Thus, we should not be asking whether Oracle is now on a trajectory to becoming one of the giants of the cloud, at least not at this point.

We can assume that Oracle is picking up deals that didn’t go to the Big Three. From my limited base of conversations with cloud users, I’m not seeing companies moving to Oracle’s cloud from one of those cloud giants. Instead, what’s happening is that companies who had no cloud commitment have picked Oracle, and some who wanted to expand their cloud usage picked Oracle instead of one of those giants. The question our cloud fans should be asking, then, isn’t the one I posed at the opening of this blog, but rather “What’s different about Oracle Cloud Infrastructure (OCI) that’s attracted those customers?” And no, it’s not some abstract factor like Ellison’s aftershave or his innovative vision. It’s something concrete.

I think that the number one technical differentiator Oracle has is that from day one, Oracle has been a PaaS or SaaS play rather than an IaaS, virtual machine, or “cloud native” play. Oracle has applications and extensive middleware to offer, and so while you could do traditional hosting on OCI, those who do so have probably picked OCI for its SaaS/PaaS options and then extended that commitment to workloads with more traditional hosting requirements.

Oracle got to that differentiator because Oracle has been a middleware and application supplier all along. Their database technology, for example, is one of the premier offerings in that area even for pure data center hosting. Many companies were Oracle customers before they’d even heard of the cloud.

That leads to the second differentiator, one that Oracle shares with IBM. Oracle has a hybrid cloud focus. They aren’t expecting everything to move to the cloud; rather, they expect the cloud to provide a home for applications and components whose resource usage model isn’t optimal for the data center. IBM, you’ll recall, also saw its earnings beat the rest of the IT players because of its express, even strident, hybrid-cloud focus. Apart from the fact that few users even entertain moving everything to the cloud, the effort that would be required to do that would be daunting in an age where skilled tech staff is hard to come by.

The truth is that the cloud and the data center are just different forms of hosting. In the 1950s companies started to use mainframes. In the 1970s they added minicomputers, and in the 1980s they added PCs. We still have mainframes, minicomputers (we’d call them “servers”) and PCs in use today. Every stage of IT evolution has built on, rather than replaced, what came before it. So it is, and will continue to be, with the cloud, and Oracle focuses on that truth. OK, given that they sell stuff into the data center, it’s not hard to see why they’d like to continue to do that and add cloud revenue to the mix, but still it’s smart positioning.

Another smart positioning move is that instead of just abstractly supporting “multi-cloud”, Oracle embraces specific relationships with competing cloud providers. On the landing page of the OCI website, they’re featuring a “multicloud with OCI and Azure” capability. And consider for a moment the choice of Microsoft’s Azure as the featured partner. Who, among the Big Three, is hottest in the enterprise market? Microsoft. Who has platform capabilities and data center credibility? Microsoft. You might expect Oracle to team up with somebody like Amazon or Google rather than with the cloud giant whose mindset is most similar to its own, but I don’t think that’s the case. Microsoft’s prospects are the same people Oracle needs to draw upon. Many of Microsoft’s Azure customers are ripe for Oracle’s PaaS/SaaS contribution, in fact.

Because Oracle’s PaaS isn’t monolithic (think database, Java, and other apps), Oracle can introduce itself into even a hybrid cloud dominated by Microsoft Azure. They don’t need to replace Azure, so their sales process doesn’t face the major hurdle of jumping a user off into the unknown. Get your camel’s nose under the Azure tent and let nature take its course.

That raises the next area, which is that Oracle has been smart in hitting them where they ain’t. They’ve weaved and bobbed into cloud opportunity areas where other providers haven’t bothered to go. Again, they have an advantage here in that because they’re relatively small, relatively small growth opportunities matter a lot. They’ve been aggressive in deploying alternate CPUs like ARM and GPUs, for example, and they’ve been particularly effective with the notion of a transportable database model and the leveraging of the near-universal Java platform.

Enterprises in contact with Oracle tell me that Oracle also generally avoids direct competitive faceoffs. They don’t advocate replacing other cloud vendors, and even in competition with other vendors their first effort is to make a place for themselves rather than competitive counterpunching. Some say Oracle has accepted a piece of the pie rather than pushing for the whole pie, when doing the latter could lengthen the sales cycle. It might be just that the Oracle sales force (like most) wants short time-to-revenue, but I sense it’s more than that. Get in, leverage incumbency, grow…that seems to be the strategy.

I don’t think Oracle is a threat to the Big Three, other than perhaps to Google. Google needs a strategy to grow market share, given that it’s the bottom of the top three providers, and Oracle and IBM have tapped off what would have been the obvious enterprise opportunity. But Oracle could be a threat to IBM, for the obvious reason that their prospect base largely overlaps IBM’s but (thanks to its database and Java) is larger. Still, all the cloud providers need to be thinking about Oracle’s approach, since the pressure on cloud costs could eventually cause current cloud users to look for other options.

The big question, I think, is whether Oracle has looked deeper into what’s behind its own success than Wall Street has. If you have a winning formula, you need to understand what it is or you can’t be assured you can continue to win. Oracle has real opportunities here if they can execute on the right things.

Is the Metro a Natural Fit for Disaggregated Device Technology?

I’ve blogged a number of times on the importance of metro in edge computing, cloud computing, function hosting, and network evolution. Metro, of course, is primarily a geography, and secondarily a natural point of traffic concentration. Economically, it’s the best place to site a resource pool for new services and network features, because it’s deep enough to present economies of scale across a larger user population, and shallow enough to permit customization on a per-user basis.

In a vendor opportunity sense, metro is the brass ring. My model says that there are about a hundred thousand “metro” locations worldwide, and obviously a hundred thousand sites, each with at least a mini-data-center, server switching, and router on-ramps, not to mention whatever we needed for personalization, would be a huge opportunity. In fact, it would be the largest source of new server and network device opportunity in the market.

Traditional network vendors see this, of course. Juniper did a “Cloud Metro” announcement a couple of years ago, and Cisco just announced a deal with T-Mobile for a feature-host-capable 5G Core gateway that makes at least a vague promise of “leading to lower latency and advancing capabilities like edge computing.” The technology includes Cisco’s “cloud-native control plane” and a mixture of servers, switches, and routers. Not too different from a traditional data center, right?

Is that optimum, though? Is a “metro” a single class of device, a collection of devices, a reuse of current technology, or an on-ramp to a new generation of technology? We really need to talk about how the new missions that drive metro deployment would impact the characteristics of the infrastructure and architecture that frame that deployment. As always, we’ll start at the top.

First and foremost, metro is where computing meets networking. We know that we need servers for the computing piece, and we know that we need both switching and routing for the network side. We may also need to host virtual-network components like SD-WAN instances, if we plan to use metro locations to bridge between virtual-SD-WAN and MPLS VPNs. Further, if we are planning to support https sessions to edge computing components, we’ll need to terminate those sessions somewhere.
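
As a minimal illustration of that session-termination point, here’s what terminating an HTTPS session on a metro edge host could look like using only the Python standard library; the certificate and key file names are hypothetical placeholders.

```python
# Minimal sketch of terminating HTTPS sessions at a metro edge host.
# The handler is a stand-in for whatever edge-hosted component the
# session is terminated for; the cert/key file names are hypothetical.

import http.server
import ssl

server = http.server.HTTPServer(("0.0.0.0", 8443),
                                http.server.SimpleHTTPRequestHandler)

# TLS ends here: traffic beyond this point can move over the metro fabric
# to the components doing the latency-sensitive work.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="metro-edge.crt", keyfile="metro-edge.key")
server.socket = ctx.wrap_socket(server.socket, server_side=True)

server.serve_forever()
```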

The second thing we can say about metro-from-the-top is that metro is justified by edge computing, which in turn is justified by latency sensitivity. I do not believe that the mission of hosting virtual functions, whether it’s arising out of 5G deployment or through a broader use of NFV, will be enough. Everyone wants to find new applications that would drive network and cloud revenue, and those new applications would have to have a strong sensitivity to network latency to justify their deployment outward from current public-cloud hosting locations.

The third thing we can say is that metro traffic is likely to be more vertical than horizontal, which has an impact on the metro data center switching model. Latency-sensitive traffic is typically real-time event processing, and to have this processing divided into network-linked components hosted on different servers at the edge makes no sense. Think of the metro as a way station between the user and traditional cloud or enterprise data centers, a place where you do very time-sensitive and important things to an application that’s far more widely distributed.

Fourth, metro infrastructure must be readily extensible to meet growing requirements, without displacing existing gear. It’s impossible that any operator would deploy a metro center at the capacity needed for the long term, when there would be zero chance that capacity could be used immediately. You need to be able to start small, metro-wise, and grow as needed. You also need to avoid having to change out equipment to reach a higher capacity, and to change management practices radically.

The final thing we can say is that metro is very likely to be a partitioned resource. The 5G missions for metro, which might involve network slicing for MVNOs or service segregation, would at least benefit from, if not require, segregated metro resources. Some operators already have relationships with cloud providers that involve resource sharing, almost certainly in the metro, and many operators are at least considering that. Regulatory issues might compel the separation of “Internet” features/functions from those of basic communications.

You can see from the sum of these points that there’s a fundamental tension in metro architecture. On one hand, it’s always important to support efficient hosting and operations, so it would be helpful to have a single pool of resources and a single management framework. On the other hand, too much unification would compromise the requirement that we be able to partition resources. But if we were to build a metro infrastructure with discrete resources per mission, the result would be inefficient from a capital-equipment utilization perspective, and management complexity would be considerably higher.

A potential compromise could be achieved if we assumed that our metro deployment was connected using a cluster device rather than a single fabric or a traditional switch hierarchy. However, there are very few cluster implementations, and in any event you’d need specific features of such an implementation in order to meet the other requirements.

I’ve mentioned one cluster router/switch vendor, DriveNets, in some other blogs. The company launched in part because of AT&T’s desire to get a disaggregated open-model-hardware router, and it’s been gaining traction with other operators since. DriveNets offers three features that facilitate a metro deployment model, and none of these features are automatic properties of a cluster architecture, so we can’t be sure that other vendors (when and if they emerge, and tackle the space) will have them. Still, these features pose a baseline requirements set that anyone else will have to address to be credible.

First, you can divide the cluster into multiple router/switch instances through software control. That offers the optimum solution to our final requirement for metro. Each router instance has its own management system linkage, but the cluster as a whole is still managed by a kind of super-manager, through which the instances can be created, deleted, and resized.
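
To make the partitioning idea concrete, here’s a toy Python model of that super-manager: a fixed pool of white-box capacity carved into router instances that can be created, deleted, and resized under software control. This is a sketch of the concept only, not DriveNets’ actual interface, and all the names and numbers are illustrative.

```python
# Toy model of software-controlled cluster partitioning: a fixed pool of
# white-box capacity carved into independently managed router instances.
# Illustrative sketch only; not any vendor's actual management API.

class ClusterManager:
    """The 'super-manager' that creates, deletes, and resizes instances."""

    def __init__(self, total_boxes: int):
        self.total_boxes = total_boxes
        self.instances: dict[str, int] = {}  # instance name -> boxes allocated

    def free(self) -> int:
        return self.total_boxes - sum(self.instances.values())

    def create(self, name: str, boxes: int) -> None:
        if boxes > self.free():
            raise ValueError(f"only {self.free()} boxes free")
        self.instances[name] = boxes  # instance gets its own management linkage

    def resize(self, name: str, boxes: int) -> None:
        if boxes - self.instances[name] > self.free():
            raise ValueError("not enough spare capacity to grow this instance")
        self.instances[name] = boxes

    def delete(self, name: str) -> None:
        del self.instances[name]

# One cluster, partitioned per mission: general metro routing alongside a
# segregated slice for an MVNO, grown as demand warrants.
mgr = ClusterManager(total_boxes=8)
mgr.create("metro-core", 4)
mgr.create("mvno-slice", 2)
mgr.resize("mvno-slice", 3)
print(mgr.instances)  # {'metro-core': 4, 'mvno-slice': 3}
```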

Next, you can add white boxes and x64 servers to a cluster without taking the cluster down, if you need to add capacity. The maximum size of a cluster is roughly comparable to that of a big core router, and the minimum size is a nice entry point for a metro deployment. All the white boxes and servers are standardized, so they can be used in any cluster for any mission, which means that you can build metro and core switches, and even customer on-ramp routers, from the same set of spares.

Finally, you can deploy network features and functions directly on the cluster using integrated x64 devices. Everything is containerized so any containerized component could be hosted, and the result is very tightly coupled to the cluster, minimizing latency. Each hosted piece can be coupled to a specific router instance, which makes the feature hosting partitioned too.
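
One plausible way to picture that coupling, assuming the cluster’s integrated x64 devices appear as labeled nodes in a standard container orchestrator, is the generic Kubernetes placement pattern below (via the standard Kubernetes Python client). The label key, instance name, and image are my own hypothetical illustrations, not DriveNets’ documented mechanism.

```python
# Generic Kubernetes placement sketch: pin a containerized network feature
# to the x64 hosts backing one router instance via a node label. The label
# key, instance name, and image are hypothetical illustrations.

from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="firewall-feature"),
    spec=client.V1PodSpec(
        # Schedule only onto nodes labeled as part of this router instance,
        # so the hosted feature is partitioned along with the instance.
        node_selector={"router-instance": "mvno-slice"},
        containers=[
            client.V1Container(
                name="firewall",
                image="example.com/features/firewall:1.0",
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```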

I think cluster routing is a good solution for metro infrastructure, possibly the best solution available. Right now, DriveNets has an inside track in the space given their link to AT&T, their early success, and the maturity of their features beyond simply creating a router from a cluster of white boxes. I’ve not heard of specific metro successes for DriveNets yet, but I do know some operators are looking at the technology. Given that there’s not really any organized “metro deployment” at this point, the lack of current installations isn’t surprising. It will be interesting to see how this evolves in 2023, because I’m hearing that more trials, and maybe even metro deployments, are likely to come.

It will also be interesting to see whether traditional vendors like Cisco and Juniper decide to step up with a metro package. 5G Core gateways are a plausible path toward metro, but one that’s a bit hard to recognize among the weeds of uncertain technology steps. “Cloud Metro” positioning is a commitment to metro, but it needs a strong definition and specific technical elements. So does the cluster model, and while DriveNets has addressed a lot of the requirements, they’ve not pushed the details either. If metro is important, it will be important in 2023, and so will the steps vendors take, or fail to take, to address it.

Some More Details on Mastodon Chatting on TMT Advisor Podcasts

As I’ve already noted in a prior post, I intend to provide an option to allow registered users of my blog.tmtadvisor.com podcast site to participate in scheduled chats about selected podcasts, using the open-source Mastodon application. This post on TMT Advisor provides more details, and full information will be available when the capability is activated after the first of the year.

Podcast: Things Past and Things to Come, December 19, 2022

We’re still facing economic uncertainties and stock market tumult, but our podcasters remain confident about the path forward in 2023. In the tech space, they talk about one particular telco project, and the vendor selected for it, and why this is important enough to break our tradition of not covering “wins”.

What’s Next for the Network Operators? It Depends.

There are probably few topics as important as where telcos are on their path to…well, maybe that’s the problem. What are they on the path to? Digital transformation? Virtualization? Telco cloud? Cloud native? We’re entering a new year shortly, a year that’s clearly going to be critical for the telcos worldwide. They are finally facing the end game in their profit-per-bit battle, something they hoped would never come and yet never really addressed. Wouldn’t it be nice if those telcos knew what end game they were playing?

You’ve all probably realized that I’ve been digging through all my interactions with the telcos and cable companies through the last couple of months, and writing about some of the things I’ve found. This blog addresses that question of the telcos’ path to the future, both in terms of what they think and what they should be thinking.

It’s hard to characterize the responses I’ve gotten from telcos to questions about where they’re heading. With just short of a hundred responses, there’s still a lot of fuzz and uncertainty. In fact, just last week I think I finally got a hint of the thing that links everything I heard. You can summarize it as follows: “Technology X, when I complete deployment, will address and eventually solve my problem.” Whatever “Technology X” is, and there are over a dozen suggestions, it’s the deus ex machina. Maybe they all have a different notion of what the “deus” piece is, but the “machina” part is clear; it’s business salvation.

You can see this is a retreat (yet again) into supply-side thinking. Those telcos have only a very vague notion of what specific thing their Technology X is going to do. Roughly half think it will somehow revolutionize costs, and the other half that it will be a source of new service revenues. The former group could probably be characterized as being the “digital transformation” group; I blogged about my concerns about both the term and the concept just last week. The latter group thinks that Technology X is 5G in general, or maybe 5G Core, or maybe network slicing, or maybe even 6G, and that’s the group we probably need to talk about.

If we go back maybe 40 years, we’d find that FCC data showed that consumers had spent roughly the same percentage of their disposable income on telecommunications for as far back as they had data. They used their phones more in proportion to the extent that using them cost less, which equalized their total cost. Today, they spend well more than triple that percentage. Why? Because the Internet came along and changed the value of telecommunications by taking “communications” out of it.

When the Internet started to gain traction, everyone dialed in. It was almost a decade before any form of consumer broadband was available, but it was the availability of consumer broadband that jumped telecommunications spending up. The key point here is that the value of the Internet pulled through the changes in networking. Would we have deployed consumer broadband had the worldwide web and dial-up modems not made it clear that there was money to be made in that deployment? I doubt it.

This excursion down memory lane (at least for those old enough to remember!) is a historical validation of the critical point that demand drives investment, not the other way around. Nobody in the 1980s was asserting that telecom should be funded because something might come along to justify that. The Internet grew by leveraging what was available, and at some point that proved that making more available would be profitable. By that standard, the applications that would drive mobile evolution and infrastructure investment, along with a good ROI on that investment, should already be in place and limping along without the support they really need. Where are they?

What actually drove the Internet revolution and the huge leap in spending on communications services was the worldwide web. You could use any PC, run the good old Netscape browser, and you had a global information resource. As a company or organization, there was a modest cost associated with hosting content for people to discover you and your information. We had analog lines, we had analog modems, we had something to use them for, and we had companies interested in exploiting all that. But most importantly, communications capabilities limited the utility of the web. It’s no wonder the web worked as a driver of new network service revenue.

Try this same approach today. Pick a technology; let’s use IoT as an example. We do, after all, have smart homes and buildings and we may be moving to smart cities. Wouldn’t something like 5G be the same kind of driver for new revenue? No, because of that last important point above; the IoT we have, and reasonable extensions to it, aren’t enhanced by new communications features. What we have is fine. The great majority of IoT is intra-facility. What goes outside isn’t the IoT itself, but notifications, and those have very modest communications demands.

Or how about edge computing? The problem there is that the value of edge computing is its ability to unlock cloud-like hosting for a class of applications that the cloud can’t run. But what are those applications? Industrial IoT might be called out, but those applications are already run on local servers and it’s difficult to see what enterprises would gain from moving them to a provider edge pool. If low latency is the goal, what could offer lower latency than a server pool local to the industrial process?

So there’s no hope? Not true, perhaps. I remain convinced that an expanded notion of the metaverse, one that focuses more on digital twinning and real-world modeling, could be a driver for edge computing and also a driver for enhanced network services. The problem is that we don’t have any coherent pathway to that approach, there’s no real interest in it among the operators, and absent resources to host and connect it, there’s not much chance that third parties will rush out to invest in creating this essential precursor element set.

Before the Internet, there was no consumer data networking at all. What created the Internet wasn’t the standards, or the fact that military/scientific applications had driven the development of the technical capability. It was that people found things they wanted to do. If we want data networking to progress, people will have to find new things.