The Dynamic of Two Tier Ones Shows Wireline Directions

It’s always interesting, and even useful, to look at how AT&T and Verizon are doing in the broadband/wireline space. Verizon has led in new home broadband technologies with its early Fios push and now with fixed wireless (FWA), and AT&T has been much more aggressive in pursuing a position as a content provider. It’s common to pick one issue or the other and use it to justify favoring one operator or the other, but I think the truth is that both operators need to face both issues, and there are barriers to that happy state in play for them both. You can see some of these differences, and issues, in their quarterly reports.

To set the stage, AT&T and Verizon have very different wireline territories, a residue of the breakup of the Bell System into the Regional Bell Operating Companies. AT&T also merged with/acquired BellSouth, which added more (and different) geography, whereas Verizon didn’t add much geography at all. That’s likely because Verizon’s territory is much more economically dense, which means that wireline infrastructure is more likely to be profitable for them. The two companies have followed different paths because they have this basic difference in what I’ve been calling “demand density”, which controls the ROI on access infrastructure. Verizon’s is good, but AT&T’s is much lower.

AT&T talks constantly about ways they’re saving on capex and opex, obviously because they need to boost their infrastructure ROI. They’ve been the most aggressive of all the Tier One operators in adopting open-model technology, changing their network, using smaller vendors, you name it. That’s not going to stop, and in fact I think it’s likely to ramp up over the next two years because of Wall Street concerns about their ability to sustain their dividend.

Cost management, as I’ve said in the past, is essentially a transitional strategy. It works only up to a point; you can’t keep relying on it, because it’s impossible to lower costs indefinitely. At some point you have to get top-line revenue growth, and AT&T has recognized that with its various media/content moves, the latest of which is the WarnerMedia move. Unlike AT&T’s cost management strategies, though, its media deals haven’t paid off for them so far (which of course is what’s behind the DirecTV and WarnerMedia stuff).

Verizon, as I’ve noted, has the advantage of a geography whose demand density matches that of traditionally broadband-rich countries. With a much higher potential ROI on infrastructure investment, they’ve had little pressure for aggressive cost management, and similarly little pressure to acquire content companies. Their demand density has not only contributed to their aggressive Fios fiber plans, but also to their taking an early and strong position with 5G mm-wave (5G/FTTN) technology.

Generally, high demand density means not only dense urban areas but also wide-ranging, affluent suburbs. That makes it much easier to deploy mm-wave technology where you can’t quite justify fiber. In effect, the demand density gradient from most dense (urban) to least (rural) is less pronounced for Verizon. They also have a competitive consideration; the biggest cable company (Comcast) is a major player in their region, and CATV has a favorable pass cost, much lower than fiber. A mm-wave position is a way of keeping the Comcast wolf from the door.

Demand density also helps in the mobile space. Operators with wireline territories tend to do much better with mobile services within that territory, even if they offer broader mobile geography. Verizon always does well in national surveys of mobile broadband quality, and their success with 5G mobile service raises the chances that they’ll eventually offer 5G broadband as a wireline alternative in rural areas where mm-wave isn’t effective.

Given profitable wireline broadband, there’s less pressure on Verizon to embrace its own streaming. They do have Fios TV in linear form, and they offer a streaming box, but it supports other streaming services. Verizon sold off Verizon Media, which was not “content” in the sense of AT&T’s deals, and that suggests strongly that they’re not in the market for video/content assets. I don’t think that’s going to change, because they can make a go of their wireless and wireline businesses.

Right now, AT&T and Verizon are perhaps competing more on strategy than on sales, given that for network operators, the planning horizon is very long. Verizon is a network player, and AT&T is looking more and more like Comcast, a broadband-and-media player…or they’re trying to be. Not only is there a strategy difference today, there may well be a bigger one emerging…the edge.

Edge computing is really mostly about metro deployment of compute assets. Yes, there are stories about putting compute resources way out toward the real edge, but that’s mostly associated with function hosting for 5G. If you’re really going to sell edge computing on any scale, you need a population of prospective buyers who are packed into an area compact enough that latency can be controlled within it.

Think for a moment about the economic realities of edge computing. An investment in “telco cloud” by an operator is very much like an investment in wireline broadband infrastructure. You plop down a resource and it has, in its spot, a market radius it can serve. The more prospective revenue lives within that market radius, the more profitable any investment will be. More profit means lower risk, a larger bet you can make. If an edge model emerges, it will be far easier for Verizon to realize it because they have extraordinarily dense metro zones spread throughout their wireline footprint.
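Here’s a back-of-the-envelope sketch of that market-radius economics in Python. Every number in it (density, take rate, revenue, site cost) is an illustrative assumption rather than real deployment data, but it shows how the same edge site pays back very differently in a dense metro versus a thin one.

    import math

    # Illustrative model of edge-site economics; all inputs are assumptions.
    def edge_payback_years(households_per_sq_mile, radius_miles,
                           monthly_revenue_per_sub, take_rate,
                           site_capex, monthly_opex):
        """Rough payback period (years) for a single metro/edge hosting site."""
        area_sq_miles = math.pi * radius_miles ** 2
        subscribers = households_per_sq_mile * area_sq_miles * take_rate
        monthly_margin = subscribers * monthly_revenue_per_sub - monthly_opex
        if monthly_margin <= 0:
            return float("inf")
        return site_capex / (monthly_margin * 12)

    # A dense Northeast-style metro versus a thinner, more spread-out one.
    print(edge_payback_years(2500, 10, 20.0, 0.05, 20_000_000, 100_000))  # ~2.4 years
    print(edge_payback_years(400, 10, 20.0, 0.05, 20_000_000, 100_000))   # ~65 years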

For AT&T, edge computing can offer comparable returns to Verizon’s only in some major metro areas; their population is much more spread out. They have a lot of smaller cities that are going to present a challenge in terms of profitable edge deployment. I think AT&T may see public cloud partnerships in 5G hosting as less of a long-term risk because they can’t yet see a long-term option for deploying their own cloud. Why not grab some benefits from a partnership when no roll-your-own is on the horizon?

This presents a challenge and an opportunity for vendors. What would a profitable metro model look like? How could you evolve to it from your current position in both 5G and metro networking? Would it scale to smaller cities, and could you network metros together, both to improve the breadth of latency-sensitive services and to better serve communities whose “edge” doesn’t touch enough dollars to permit deployment of a pool of hosting with economy of scale?

The cloud, and the network, are evolving. The players in both spaces are demonstrating that they understand that, and they’re also struggling to understand how to deal with it. The way this critical technology fusion works out will decide the future of a lot of services and a lot of vendors.

IBM Faces a Very Important Choice for Cloud

IBM’s quarter was disappointing to most on Wall Street, their stock declining sharply with their announcement last week. Red Hat managed to post 17% growth, but IBM’s own products showed a decline. Only its consulting unit showed momentum, with an 11% gain. There’s still Street hope for a hybrid-cloud-driven advance overall, but I think that there are clear signs that IBM faces significant pressure to make the Red Hat acquisition a part of an overall company plan. That the pressure comes from multiple directions only makes things harder.

One example of this multidimensional pressure is hybrid cloud itself. We are only now starting to see broad understanding that in a “hybrid cloud” there’s a shift of application modernization effort (and budget) to building cloud front-ends to existing applications. This shift, by its very nature, tends to advance cloud spending at the expense of advances in data center spending. IBM’s own products and services are aimed largely at the data center, so it’s not surprising that a Red Hat win is at least somewhat linked to an IBM loss. That’s where project budgets are leading things.

Another problem is that the easiest way for IBM to recoup some of this budget-shift loss would be to post a bunch of wins for IBM’s own cloud. However, IBM is by most metrics fourth in the public cloud market, and Red Hat’s success is likely tied closely to being cloud-agnostic. You could argue that the current numbers show IBM cannot win with its own products and with Red Hat at the same time, because the inverse cloud-versus-data-center budget dynamic can’t be reversed by an IBM cloud success.

The third problem is the usual problem with big M&A, which is culture shock. I’ve got contacts with both IBM and Red Hat, and it’s interesting that both see the situation in the same way: radical open-source geeks trying to cohabit with people whose architectural models are a couple of decades old. Buying Red Hat was more than just smart for IBM, it was essential, but making sense of the acquisition is obviously difficult. The Street has always reported on the two companies as though they were casually dating rather than joined at the hip…because that’s what’s really true.

Hybrid cloud is likely the best model that IBM could have come up with to form the basis for unifying the fundamentally different worldviews. IBM’s strength lies in the commitment of its big customers to mainframe-model computing and the associated software. Red Hat’s strength lies in the cloud, and in the broader open-source movement. Successful hybrid bridging would require not only a way of advancing both cloud and data center, but also a way of making IBM’s own products relevant to the broader Red Hat opportunity base, which is made up mostly of companies that IBM has no influence with today.

The biggest technical challenge for IBM is defining that “way of advancing both cloud and data center”. Part of the problem lies in the way that cloud services are priced, a mechanism that, by charging for data transport across the cloud boundary, tends to make cloud and data center distinct application domains with a minimalist linkage. Today’s enterprise use of the cloud is dominated by “front-end” missions where the cloud provides an elastic user-experience-oriented element that’s loosely coupled to transactional applications in the data center. This approach has proved highly responsive to business needs as they migrate toward a more customer-centric approach to sales and support, but it limits the symbiosis between the two pieces.
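To see how that boundary pricing shapes design, here’s a rough illustration. The per-GB egress rate and the traffic volumes are assumptions I’ve picked for the example, not any provider’s actual price sheet; the point is simply that chatty coupling across the boundary gets expensive fast.

    # Illustrative only: how per-GB egress charges discourage tight coupling
    # between cloud front-ends and data center back-ends.
    EGRESS_PER_GB = 0.09  # assumed rate in USD, not a real price sheet

    def monthly_egress_cost(transactions_per_day, kb_per_transaction):
        gb_per_month = transactions_per_day * 30 * kb_per_transaction / 1_000_000
        return gb_per_month * EGRESS_PER_GB

    # Loosely coupled front-end: small, summarized calls back to the data center.
    print(monthly_egress_cost(50_000_000, 2))    # ~$270/month
    # Tightly coupled design: bulk data crossing the boundary on every call.
    print(monthly_egress_cost(50_000_000, 500))  # ~$67,500/month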

IBM, as a cloud provider, has the theoretical option of dropping those transit charges, but for the cloud providers, that decision would raise their network traffic levels sharply while eliminating the revenue that previously justified (and paid for carrying) that traffic. However, companies like Cloudflare are already pushing for transit-cost immunity, and if it were to catch on with the Big Three in cloud, IBM would surely be forced to adopt the no-transit-cost model even if it didn’t help create cloud/data-center symbiosis.

It wouldn’t eliminate the challenge of that symbiosis, either. Even if we suddenly erased the data-movement-cost barriers between data center and cloud, we would still have other issues.

The first of these issues is the risk that the cloud, freed of those data-movement costs, would absorb more data center spending. Since IBM is far from dominating the cloud, they would risk putting some of their data center revenues on the line for transfer to their competitors in the form of cloud charges. I don’t think that the elimination of transit charges would result in the cloud taking over entirely (as some believe), but I do think that more transaction edge processing would likely migrate out to the cloud.

The second issue is the question of (yes, my favorite factor!) architecture. How do we create what’s essentially a third hosting zone, one that both cloud and data center could support in some agile way? There’s a financial dimension to the issue; if the cloud supports elastic resource allocation that approaches or achieves usage pricing, how does the data center respond? Do we end up with “IT-as-a-service” even in the data center, where vendors have to sell their hosting and platform software on a usage-price basis?

What we’re looking at, I think, is the ultimate form of platform-as-a-service, where there’s a virtual middle-zone platform that perhaps looks a bit like the offspring of a cloud and data center dalliance. This platform, which could be VM-like, container-hosting, or both, would then map to either cloud or data center resources, making it easy to develop something that runs in the middle zone no matter where the cloud-to-data-center boundary falls for that zone.
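To make the idea a little less abstract, here’s a minimal sketch of what that middle-zone mapping might look like. Every name and policy in it is hypothetical; a real implementation would ride on container or VM orchestration, but the principle is the same: describe the workload once, and let a policy decide which side of the boundary it lands on.

    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        image: str              # container image or VM template
        latency_sensitive: bool
        usage_priced_ok: bool   # can this piece tolerate usage-based pricing?

    def place(workload: Workload) -> str:
        """Toy placement policy for the virtual middle zone."""
        if workload.latency_sensitive and not workload.usage_priced_ok:
            return "data-center pool"
        return "cloud pool"

    print(place(Workload("order-frontend", "shop/frontend:1.2", False, True)))
    print(place(Workload("inventory-core", "shop/core:3.0", True, False)))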

This shouldn’t be a big deal for either IBM or Red Hat to develop, but I have to wonder whether IBM even sees the issue, much less is prepared to take a step in a novel direction to resolve it. So far, their quarter suggests that they’re hoping that their consulting services will provide the cloud-to-data-center glue, and I don’t think that’s going to work.

Companies often make a mess of M&A, mostly when they try to create a forced symbiosis between the buyer and the acquired company. So far, IBM hasn’t forced the fit, but they’re now at a point where they either have to create some harmony or accept that Red Hat is only going to contribute revenue, not create an enhanced and merged technology set. The Street won’t like the former choice in the near term, and IBM is bucking the odds if they bet on the latter, unless they work out that boundary architecture, and quickly.

A Tale of Three Clouds

Microsoft’s cloud revenue was up 36% this quarter, Amazon’s was up 40%, and Google’s was up 45%. Obviously the cloud is doing well, and obviously Google is doing unusually well, measured by revenue growth. However, there’s a lot more to the cloud story, and what’s there could be very, very important.

The problem with raw revenue growth as a measure of cloud success is that it presumes that everyone is targeting the same market, and that everyone is starting from a fairly equal base. If a cloud provider has significantly lower revenue, even small dollar gains will translate into big percentage gains, and since Google is at the bottom of the cloud Top Three, that’s surely a factor here. If a cloud provider is targeting more startups than enterprises, as Amazon has consistently done, then they’ll do better when there’s a wave of interest in some new cloud-hosted application. That’s a factor here, too.
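A quick illustration of that base effect, using round numbers I’ve made up rather than anyone’s reported revenue: the same dollar gain looks far more impressive on the smallest base.

    # Illustrative revenue bases in $B per quarter; not reported figures.
    bases = {"Provider A": 16.0, "Provider B": 11.0, "Provider C": 4.0}
    gain = 1.5  # identical absolute gain for each, in $B

    for name, base in bases.items():
        print(f"{name}: {gain / base:.0%} growth on a ${base}B base")
    # Prints roughly 9%, 14%, and 38% for the same $1.5B gain.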

Amazon is the largest of the three providers, and has the advantage of name recognition, maturity of offerings, and both geographic and hosting scale. Their biggest negative is that they’ve been successful enough in their relationship with OTT companies that they’ve not pushed as hard in hybrid cloud as rival Microsoft. That’s limited their participation in the enterprise cloud market.

Microsoft, number two, has focused on the enterprise from the start, and so they benefited as companies turned to building cloud front-ends to legacy applications to respond to a greater need for direct, online sales and support. They’re often seen as the logical alternative to Amazon for enterprises, but much less so as a partner for OTT firms. Their Teams, LinkedIn, and Microsoft 365 offerings have been successful and profitable, and improve their overall cloud economy of scale.

Google has the best cloud technology out there, and they also have the potential to achieve both economy of scale and geographic scope that would at least match and very possibly exceed that of either of their rivals. They’ve also taken steps, in the most recent quarter, to eliminate some technical disadvantages they’ve suffered in the past, and most significantly, to tap into Microsoft’s enterprise opportunity base. Their cloud is built on much the same technology as their vast search and advertising base, giving them the potential for that scale/scope push I’ve noted.

Where does this leave us with respect to the future of these three giants, and of the cloud? I think that there are two forces/factors driving cloud computing. The first is the scope of the hybrid cloud opportunity, and the second is the edge computing opportunity. Both of these opportunities are already in the sights of the three cloud providers, but neither is really locked up yet. There’s at least one decisive force that’s yet to be played out in both opportunities, and it’s not clear who sees that, much less who will exploit it best.

Hybrid cloud, despite Microsoft’s initial platform-as-a-service centricity, is really about creating a loose coupling between a cloud front-end GUI-centric component and legacy transaction processing. The truth is that this model of hybrid cloud doesn’t favor Microsoft because they really don’t have a data center incumbency. IBM, which has been whiffing its swings on hybrid cloud despite having the best account position to control it, has all the right credentials for hybrid.

But if IBM whiffs, then who benefits? Answer: mostly Amazon, and perhaps (with the right positioning) Google. Hybrid cloud, absent some specific framework (middleware, PaaS, or whatever you’d like to call it) to unify the zone between cloud and data center, really makes the cloud into a very sophisticated and loosely coupled GUI. Amazon’s role with social-media and other OTT players means it has the tools to create “social front-end” technology. So does Google’s own search/ad business.

If the hybrid cloud remains a GUI glued on top of legacy applications, then the dynamic of the opportunity won’t change much over the next couple of years. All three providers would likely have a shot at the growing enterprise dependence on the cloud, and I’d expect to see all three benefit to a comparable degree from growth in enterprise hybrid cloud spending. Since I don’t think that it would be easy for any of the top three cloud providers to change the dynamic of the hybrid cloud opportunity, I don’t think that a direct assault on hybrid cloud will change the market in the near term.

How about an indirect assault? Edge computing, our second opportunity area, is emerging as an opportunity in itself, but also as an element in hybrid cloud.

The obvious question about “edge computing” is “Where’s the edge?” Since we’ve accepted the truth that edge computing is justified by applications with significant latency sensitivity, the edge can’t sit any further out than the source of events (meaning the facility in which events are generated), nor any further inward than the metro area where those facilities are found. The former position is what creates the symbiosis between hybrid cloud and edge computing, and the latter is what creates a new cloud revenue opportunity.

The great majority of edge applications relate to IoT, or M2M if you prefer. Because these applications represent a bridge between real-world activity and applications, they have to stay in sync with the real world or risk irrelevance at best or actual danger to people and property at worst. What the cloud providers have done is create extensions between cloud and premises edge, giving them a way of capturing the latter opportunity even before they’ve deployed any specialized edge computing resources within the cloud itself. Doing that requires a place to put them, of course, so there’s likely to be a delay in realizing any cloud-edge opportunity.

The non-user side of the edge is perhaps even more problematic. To bet on IoT to drive cloud-edge opportunity is to go beyond recklessness, financially speaking. That’s why there’s so much interest in telco cloud among the cloud providers (they were a major presence at the recent MWC in California). If 5G deployment, including O-RAN, creates an appetite for edge computing, and if it continues to drive partnerships between telcos and cloud providers, then might those deals culminate in real-estate-sharing practices? Some already have, and those practices could thus position assets that could eventually be harnessed for other edge applications, including IoT.

Even gaming and Facebook’s metaverse hopes could be incentives to drive edge deployment, but IoT, gaming, and metaverse all require an architectural model that also can be back-fitted to 5G in order to provide a safe balance between investment and return. Otherwise, new applications could be stalled by a lack of compatible facilities. Who provides those models? It could be a network vendor, a cloud software provider, a cloud provider, or maybe someone from left field; only time will tell.

Are Telcos Being Left Behind or Squashed?

We’ve heard nothing but doom and gloom for telcos in the race to relevance, and two Light Reading stories frame the latest round. The first story says that the digital transformation race is leaving telcos behind, and the second story that the big tech elephant is squashing telecom. The combination presents a view that the telcos are becoming the dumb-pipe players that many had predicted and some had feared.

Back in 2004, I participated in a mock presidential debate on election day, with the topic being “smart networks or dumb networks”. I had the smart-network side, and I won the debate, which I think proves that there was a compelling story to be told on the topic even then. The fact that we’re still declaring dumb networks the winner in reality in 2021 is proof that the telcos didn’t tell it, or perhaps didn’t even want to tell it. Or perhaps couldn’t tell it for reasons out of their control.

Digital transformation isn’t leaving telcos behind as much as it’s taking place above them. Birds don’t replace moles (yes, an unglamorous comparison, but one I think is fair) because they’re not in the same ecosystemic slot. We need moles, glamour deficit relative to birds notwithstanding. Not everyone can be over-the-top; that position requires that somebody be at the bottom. Connectivity is the foundation of today’s advanced services, as it was the foundation for other network services in the past. Should telcos flee that to be digitally transformed? Who steps in?

There are a lot of reasons why telcos aren’t innovative at the service level, and things like a truly fossilized culture are surely among them, but there are also hard economic realities in play, and some regulatory positions that many hold dear.

Imagine two competitors in some unnamed higher-level network service. One competitor offers only that service, and uses the network connectivity the other provides. The other is the provider of that connectivity, and adds the higher-level service to their repertoire. Connectivity services have been suffering a profit-per-bit slump for at least one and likely two decades, which means that ROI on those services has declined to the point where it’s below what most vertical markets require to sustain share price. Our second competitor likely has to invest most of its capital at this low return, while our first competitor simply pays commodity prices for connectivity. Who wins this war? Unless you’re into fantasy, you know the answer to that one.
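Here’s the arithmetic of that thought experiment, sketched with purely illustrative capital splits and returns; the only point is how the capital tied up in low-ROI access drags down the telco’s blended return.

    # Toy comparison; every figure is an assumption for illustration.
    def blended_roi(capital_split, returns):
        """capital_split and returns are parallel lists; the split sums to 1.0."""
        return sum(w * r for w, r in zip(capital_split, returns))

    # Pure OTT competitor: nearly all capital goes into the high-level service.
    print(blended_roi([1.0], [0.18]))             # 18%
    # Telco competitor: most capital is stuck in connectivity at a low return.
    print(blended_roi([0.8, 0.2], [0.05, 0.18]))  # about 7.6%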

There’s a strong argument that the reason for the declining profit-per-bit picture is the Internet that created those higher-level services in the first place. One reason the Internet was so successful was that the entire Internet was based on three principles: unlimited usage at a flat price, all-provider peering, and bill-and-keep. No matter how much traffic a user generates or absorbs, their price doesn’t change. All ISPs are interconnected, so being on any one will give you access to everyone on the Internet, and each provider bills its own customers and keeps all the money. This made Internet startups easier to finance, but it also eroded the business model of that telco bottom layer. We count all of these principles as part of “net neutrality”.

I’ve opposed bill-and-keep for decades, but I’m not going to reprise that point here. My point is that the situation the telcos are in makes it difficult for them to rise above the connection layer, and difficult to make the connection layer profitable. Thus, any higher-layer service would have to subsidize the ROI losses the telcos incur in connectivity, and competitors without those connectivity services would have no such need.

This doesn’t mean that telcos couldn’t win in higher-level services, only that they couldn’t win in those services that depended only on widespread, open, zero-usage, connectivity. The mistake that the telcos have made is not realizing that there were other services that perhaps weren’t “higher-level” but “parallel-level”.

Let’s go back to a point I made earlier. “This made Internet startups easier to finance….” Venture funding launched what we know as the Internet at least as much as the web standards presented by Tim Berners-Lee did. In a sense, VCs were parasites on the telco host, and asking telcos to jump into “the Internet” at the higher level was asking them to self-parasite. But are all possible new services VC ticks on a telco body? No. The term “over-the-top” means “on the Internet, above the connection layer”. Could there be services parallel to the connection layer?

I contend that there are, and that IoT is an example. If we look back at the first vision of the “Internet of Things”, it’s clear that it depended on a connection layer, but also on a huge community of “things”. Those early visions didn’t assume that IoT was nothing more than home thermostats with WiFi access; they assumed that a sensor population, open and available for exploitation just as the connection layer was exploited, would create another OTT-like revolution. It might, too, but VCs would never fund that.

No VC would have funded the Bell System either, which is why telecommunications started as a regulated monopoly. But we have quasi-utilities today with a lot of free cash flow and a very low internal rate of return, meaning they could accept projects that required a lot of capital (“first cost”) and had a low ROI. We call them “telcos”.

Telcos could have turned IoT from a collection of sensors, systems, and connections into a global service of awareness, just as they converted twisted pairs and plug panels and switches into a service of communications. They could have then either offered their own IoT services, wholesaled their service of awareness to others, or both. Instead, they did everything in their power to turn a service-of-awareness opportunity into another doomed-to-commoditization connection service. 5G, say the telcos, supports IoT because it lets them sell 5G services to sensors and not just humans.

This was, and is, the Great Telco Blunder. They could have launched a whole new industry, and they not only let it pass them by, they actively ran from it. Not from the classic OTT opportunity; they never had such an opportunity. From the service-of-awareness opportunity, which nobody had and which they were uniquely positioned to exploit. And now, not surprisingly, we’re on the edge of “too late” for that opportunity.

Amazon Sidewalk isn’t exactly a revolution. It’s a way of creating a kind of neighborhood federation of home networks, allowing home-related telemetry to ride on adjacent households’ connections where the home network doesn’t reach. What it might revolutionize is the service of awareness. If all of the individually deployed IoT sensors could be collected in a vast, organized, secured federation, we could end up with the mass sensor-network deployment the telcos could have initiated, and didn’t.

The telcos aren’t falling behind in the digital transformation race, they’re not even running. They’re not getting squashed by big tech, either. In one sense, they’re performing a role nobody wants (maybe not even them) and in another they’re throwing themselves under every bus that big tech presents. But the big story, the big truth, is that the problem isn’t over-the-top, it’s by-the-side, and they’re about to lose the biggest, and perhaps last, opportunity they have to be a winner.

VMware Ups Its Telco Cloud Game

VMware is stepping up in telco cloud. There’s no question that the company is drawing its positioning line in the sand, and there’s no question that it has the market position to make that line into a promise to buyers and a threat to competitors. The only question is how the moves will balance out; how they will direct VMware along what’s become a pretty tangled path. The biggest issues are that “telco cloud” has become more than just hosting network functions, and that partnerships between public cloud providers and telcos call into question whether there’s really any “telco cloud” on the horizon at all. In short, the telco market has been a minefield for cloud vendors, and so VMware has a narrow path to navigate if it wants to succeed.

The most important thing about “telco cloud” is its uncertainty. So far, telcos have shown relatively little interest in deploying their own cloud infrastructure, and many have in fact fobbed off the mission to public cloud partnerships. It’s also clear that vendors like VMware would derive relatively little benefit from a “telco cloud” that was confined to a single application like network function hosting, even if there were independent deployments by telcos. Telco cloud is really edge, and so we have to look at everything that targets it in the light of its impact on (and how it’s impacted by) edge computing.

The key element of the VMware announcement is the VMware-provided RAN Intelligent Controller, or RIC. The RIC is the key component of an O-RAN implementation, the thing that differentiates O-RAN from the more monolithic 3GPP 5G RAN model. For VMware to deploy its own version is a big deal; it gives them not only a differentiator relative to firms who use other RICs, but also an opportunity to tie the RIC, and O-RAN, more tightly to their cloud software. It seems clear, based on this, that VMware sees (correctly) that 5G will be an early driver for deployment of hosting resources at the edge, and thus is likely to be the first credible driver for edge computing deployment.

The thing about O-RAN, though, is that unless it does lead to edge computing, it becomes nothing more than an open model for hosting some arcane 5G RAN elements. If, on the other hand, it could define and jump-start edge deployment, it could be market-changing. Even so, O-RAN-centric positioning is just one of three possible ways of approaching the edge, so let’s look briefly at the other two in order to compare their effects, and see whether other models might threaten VMware.

The second edge model is the cloud-extension model, which says that edge computing is cloud computing hosted at the metro level rather than in regional data centers. The reason for the metro extension is to control latency, so edge computing is low-latency cloud computing based on this model.

The third edge model is the metro fabric model. This is the edge approach that most network equipment vendors give a nod to. Because edge computing is where higher-level network functions are hosted, edge locations need to have a different kind of networking, one less focused on simple aggregation and more on meshing of service components.

While I believe that VMware’s announcement (referenced above) demonstrates that it’s in the 5G-and-virtual-function-hosting camp with regard to the edge, it also seems to work hard to cover the other models. VMware Telco Cloud Platform-Public Cloud provides integration with public cloud services, and Telco Cloud Platform-Edge accommodates the metro-fabric approach by being largely network-agnostic. That makes the third of our models appear to be VMware’s biggest risk, but the biggest risk to VMware and everyone else is that what emerges in the end is a combination of the three models.

Back when Juniper announced its Cloud Metro strategy earlier this year, I blogged about what the future edge/metro should look like (as a prelude to assessing Juniper’s announcement). In effect, it’s a virtual data center made up of metro host points connected in a low-latency mesh, combined with the architectural tools to manage the infrastructure, deploy edge applications based on some architectural framework, and manage the resulting conglomerate. Metro/edge rebuilds networks around the role that metro necessarily plays in enhanced service hosting, and service features. It’s this vision that VMware and others somehow have to support.

You can see the three models of edge emerging in my virtual-data-center definition, and you can also perhaps see why 5G is a complicated driver for edge computing. Vendors like Ericsson and Nokia can offer a complete 5G strategy, but they have to reference external specifications and vendor/product combinations to expand this to edge computing. There are no convincing external specifications, no edge architecture that plays the role O-RAN plays for 5G. Vendors like VMware can reference O-RAN and describe an edge strategy (as they do in the announcement), but they leave the network piece open, which means that network vendors could aim directly at the VMware position, and that any player could align with a network vendor or define a metro network model.

What would VMware have to do to defend its Telco Cloud strategy? The VMware blog post I reference above lays out the elements of their strategy fairly well, but it doesn’t position that strategy well, so that would be a good starting point. Most vendors who target the telecom space have a tendency to sell rather than market in their material. There’s little about benefits or value propositions; it’s about speeds and feeds, or at least about functional components. This reflects a view that the buyer community (the telcos) is actively seeking the solution the vendor is offering, and only needs to understand the pieces in the offering.

In the case of telco edge computing, I’d argue that events disprove this position. Telcos are actively seeking, at least for the moment, to avoid edge computing in the form of telco cloud deployments. They hope to realize 5G hosting through the public cloud providers, as I’ve already noted. My work with both telcos and enterprises over the years has shown me that successful sales is a matter of managing the trajectory that defines how “suspects” become “prospects” and then “customers” in turn. It’s tough to do that if you don’t address what those “suspects” are actually doing.

It wouldn’t be difficult for VMware to position their stuff right, to manage that critical trajectory, but while I can say that with a fair amount of confidence, I have to admit that the lack of difficulty hasn’t enabled other vendors to do much better. There seems to be a sell-side bias toward under-positioning and failing to consider the suspect-to-customer evolution.

What’s going to be more difficult for VMware is that its credibility depends on platform dominance, and it’s not really articulating an edge platform strategy. 5G hosting could pull through the edge, provided that the edge architecture supports both 5G and future applications like IoT. Right now, VMware is essentially saying that edge hosting is the same as cloud hosting, which, if true, takes a big step toward invalidating their own value proposition. If it’s not true, then it’s incumbent on VMware to explain what edge hosting really is.

VMware is, IMHO, the best edge-opportunity-positioned of the cloud-software players. They have great assets, and great insights. There’s a big opportunity in the metro/edge space and they seem determined to grab a major piece of it. We’ll watch to see how they do in 2023.

Comcast’s XClass TV Might Be a Game Changer

Streaming is the current revolution in video delivery. OK, it may get its biggest boost from the fact that it can deliver video to smartphones and other devices over any good Internet connection, but it also has competitive benefits…and risks. Comcast, the biggest cable operator in the US, may be taking the competitive gloves off with its XClass TV, and that may elevate the market importance of streaming’s competitive impact, perhaps even above the impact of mobile delivery.

Video is perhaps the most important consumer entertainment service, and its roots lie in “cable TV”, which is dominated by live programming with optional recording of shows for time-shifted viewing. Because the original model depended on “linear RF” delivery, video has tended to require specialized media—CATV or fiber cabling. That model doesn’t work for mobile service, and it obviously limits the scope of video service to places where an operator could expect enough customers to deploy wireline infrastructure.

The streaming video model, by delivering video over the Internet, opened up video access to anyone with high-quality broadband. Yes, that meant mobile devices, but it also means “out-of-region” options for the previously regionally-constrained players, like Comcast. AT&T had previously taken a shot at streaming video, and they’ve revamped their streaming strategy, but it didn’t seem to convince wireline video competitors that they also needed to think about their own streaming strategy. Comcast may do that.

A shift to a streaming model would, of course, favor streaming providers, and that’s why competitive pressure created by moves like Comcast’s could kill off linear TV, at least in its CATV/fiber form. Over-the-air programming in major metro areas isn’t likely to be impacted in the near term, though there would be benefits in salvaging the spectrum used for TV broadcast. The problem is that many urban viewers rely on over-the-air, and it’s even popular in some suburbs. There’s a public policy question that would have to be addressed to eliminate linear TV, and I don’t think that will happen soon. However, we could expect other impacts.

One possible impact may sound surprising: a shift to streaming video would likely promote municipal fiber. Home broadband needs some strategy for live TV, and linear is not only too expensive an option for most muni projects to handle; the licensing of the programming would also be prohibitive. If we had a growing number of national live-stream-TV players, then governments that wanted to offer fiber could be sure that their citizens could have their TV. It’s even possible that they could cut deals with providers for a lower price, perhaps to help offset deployment costs.

But the second impact could complicate things. We’re already seeing networks field their own streaming services, and that could ultimately lead to their reluctance to let their programming be incorporated in a multi-network streaming service of the type that we’re familiar with these days. This could end up either driving up the cost of those cable-like-streaming players’ services, or even driving some of them out of the market. The cost of signing up for every network’s streaming service would likely be higher than today’s multi-network service cost.

The biggest impact we could expect from moves like Comcast’s is that very per-network balkanization of streaming material I just mentioned. It’s more profitable for a strong network to deploy its own streaming services than to deal with a dozen or more aggregators. The streaming model liberates the content owners, and let’s not forget that Comcast owns NBC Universal. Recall that NBC recently had a tiff with YouTube TV over pricing, and over whether Peacock (NBC’s streaming service) should be incorporated en masse into YouTube TV. It’s hard not to see this as a bit of content chest-butting, and linked to the Comcast decision.

The big question is whether Comcast will take things beyond XClass TVs and into dongles or even just standalone streaming. Comcast said (in the referenced article) that XClass TV is their first initiative to sell their streaming service out of their own area, and without Comcast broadband underneath. Linking their out-of-region stuff to TV sales slows down the impact, which is surely what Comcast intends. If the whole notion lays an egg, it’s easy to pull out without compromising a later attempt to go national through some other mechanism. It also lets Comcast gauge the impact of their move on other networks, and competitors.

If we were to see a decisive shift to every-network-for-itself thinking, it would bode ill for the smaller networks, who will have a problem raising enough subscribers to field an offering. Might big aggregators of network content then end up depending on those smaller and more specialized players?

Another question the Comcast move raises is “What about Amazon?” Amazon’s Prime Video service is widely used, but it doesn’t include live content except through “Channels”, which are relationships with live-TV network providers. Might Amazon step up? They already produce multiple TV series of their own, releasing all the episodes in a season at once. Might they adopt a more traditional “it’s-on-at-this-time” model? Might they become an aggregator of smaller networks who can’t stream on their own?

And if Comcast follows XClass TV with an XClass dongle, doesn’t that make them a player like Google, which offers YouTube TV, Android TVs, and Chromecast with Google TV? The breadth of Google’s play could well be at the root of their dispute with Roku, a dispute that threatens to take YouTube TV off Roku devices (and presumably onto Google’s own dongles).

XClass could be trouble for competitors, and a driver of change in the industry. It could also create a challenge for Comcast. The great majority of its customers are watching linear TV, and the cable industry overall is still largely focused on that delivery model for video. Changing things would mean retrofitting not only the cable plant, but also the customer premises cable modem equipment. It may be that Comcast is signaling that this seemingly dire course is now definitely the path of the future. If that’s the case, then it decides the DOCSIS evolution debates, and signals the death of linear TV except (perhaps) in the broadcast-and-antenna world. Streaming, in any event, is clearly winning.

What the Recognition of a New Cloud Hardware Model Means

Specialized microprocessors for smartphones aren’t anything new, but we’re starting to see (from Google, for example) custom processors for smartphones from the phone vendors themselves. Apple has introduced their own microprocessor chip for PCs, and Amazon has used its own customized chip for some cloud services since 2018. NVIDIA GPUs have been available in the cloud for some time, too. Now the story (from Wall Street) is that Alibaba is going to launch a line of chips for use in cloud computing servers. Are we seeing the end of x86 chip dominance, or maybe even the age of merchant microprocessors?

Microprocessors are fundamental to the cloud, of course. If we look at the cloud as a simple utility computing model, with cloud IaaS competing with on-premises VMs, we might expect that having a specialized chip could offer a cloud provider an advantage in cost/performance that could translate into a competitive advantage. But how much of that “if-we-look-at” statement is really true? Could Alibaba have other reasons for wanting their own chip?

Both the AWS and Alibaba chips are based on an ARM CPU (Apple’s M1 chip is also based on the ARM architecture), which is a totally different processor architecture than the classic x86 chips of Intel and AMD. That means that binaries designed for x86 (and/or x64) won’t run on ARM-based chips. If we were to presume that cloud computing was all about “moving existing applications to the cloud”, this would be an almost-insurmountable problem, because most third-party software is delivered in x86 form only. But that’s not what the cloud is about, and we have to start with that truth.

Cloud computing today is dominated by GUI-centric development, either for social-network providers or for enterprises building front-ends to legacy applications that will themselves stay in the data center. This is new coding, and in almost all cases is coding done in a high-level language, rather than in “assembler” or “machine language” that is specific to the microprocessor architecture. You can get a compiler for most popular high-level languages for ARM chips, so new development fits the alternative chip architecture. In fact, GUI-specific apps seem to run much faster on ARM chips, largely because the x86 architecture was designed for general-purpose computing rather than real-time computing, which is what most cloud development really is.

The reason for ARM’s real-time benefit is that the name comes from “Advanced RISC Machines”, where RISC means “Reduced Instruction Set Computing”. RISC processors are designed to use simple instructions that have a very low execution overhead. There are no complex instructions, which can mean that doing some complicated things will require a whole series of instructions that on an x86 machine would be done in one. Real-time processes usually don’t have those “complicated things” to do, though, and so ARM is a great model for the front-end activities that really dominate the cloud today. It’s also great for mobile phones, which is why RISC/ARM architectures dominate there.

None of this should be a surprise, but perhaps the “move-to-the-cloud” mythology got the best of everyone. NVIDIA is trying (with regulatory push-back) to buy Arm (the company), and I think the reason is that they figured out what was really going on in the market. The majority of new devices, and the majority of new cloud applications, will be great candidates for ARM (the processor). So does this mean that Alibaba is doing the right thing? Yes. Does it mean that it will gain share on Amazon and other cloud giants? No.

Obviously, Amazon is already offering ARM hosting, but so far the other major cloud providers aren’t. That’s very likely to change; some sources on Wall Street tell me that both Microsoft and Google will offer ARM/RISC processor instances within six months. Alibaba’s own move would be likely to generate more interest from the second and third of our cloud giants, but I suspect that Amazon’s recent successes with ARM would be enough. There are some extra issues that a cloud provider has to face if they offer ARM hosting, but they’re clearly not deal-breakers.

The most significant issue with ARM hosting is the web services library that the operator offers. If the web services are designed to be run locally with the application, then they’d have to be duplicated in ARM form in order to be used. It’s possible to run x86 on ARM via an emulator, but performance is almost certain to be an issue.

A close second, issue-wise, is the possible confusion of cloud users. Some binaries will work on ARM and others on x86/x64, and you can’t mix the two. In cloudbursting situations, this could present issues because data centers rarely have ARM servers, so the data center can’t back up an ARM cloud. Similarly, an ARM cloud can’t back up the data center, and you can’t scale across the ARM/x86 boundary either. All this means taking great care in designing hybrid applications.
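One obvious guardrail is to make processor architecture an explicit scheduling constraint. The sketch below is hypothetical (the pool names and their architectures are invented), but it shows the kind of check a hybrid design would want before bursting binary-dependent work across the ARM/x86 boundary; platform.machine() is a standard Python call.

    import platform

    ARM_NAMES = {"aarch64", "arm64"}
    X86_NAMES = {"x86_64", "amd64"}

    def same_family(local_arch: str, target_arch: str) -> bool:
        """True if both architectures belong to the same instruction-set family."""
        for family in (ARM_NAMES, X86_NAMES):
            if local_arch in family and target_arch in family:
                return True
        return False

    local = platform.machine().lower()
    pools = {"cloud-arm-pool": "aarch64", "dc-x86-pool": "x86_64"}  # hypothetical
    for pool, arch in pools.items():
        ok = same_family(local, arch)
        print(f"{pool}: {'compatible' if ok else 'needs a rebuilt or emulated image'}")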

Another issue is economy of scale, and this issue is hard to judge because of edge computing. A major cloud provider could almost certainly offer enough ARM hosts to achieve good economy of scale, within a couple percent of what they have overall. However, edge computing necessarily creates smaller resource pools and so further dividing an edge pool could threaten edge economics and ARM benefits. The question is whether edge applications, which are likely real-time in nature, could be so much better served with ARM hosts that the edge would go all-ARM.

The ARM question is really an indicator of a major shift in cloud-think. We’re finally realizing that what runs in the cloud is not only something that was written for the cloud and not moved from the data center, but also something that may have little in common with traditional data center applications. That’s why ARM/RISC systems, GPUs, and perhaps other hardware innovations are coming to light; it’s a different world in the cloud.

The edge is even more so, and there’s still time to figure out what an optimum edge would look like. That’s a useful step in framing out an architecture model for edge applications, something I’ve been advocating for quite a while. The trick is going to be preventing a debate over processor architecture from distracting from the architectural issues. There’s more to software than just the compatibility of the binary image, and I think that the hardware insights we’re now seeing will be followed by software architecture insights, likely as soon as early next year.

Just How Real Could our Virtual Metaverse Be?

Facebook is said to be considering renaming itself to claim ownership of the “metaverse”, which has led many (especially those who, like me, are hardly part of the youth culture) to wonder just what that means. The fact is that the metaverse is important, perhaps even pivotal, in our online evolution. It may also be pivotal in realizing things like the “contextual” applications I’ve blogged about.

At the high level, the term “metaverse” defines one or many sets of virtual/artificial/augmented (VR/AR) realities. Games where the player is represented by an avatar are an example, and so are social-network concepts like the venerable Second Life. Since we’ve had these things for decades (Dungeons and Dragons, or D&D, was a role-play metaverse and it’s almost 50 years old), you might suspect that new developments have changed the way we think about this high-level view, and you’d be right.

Facebook’s fascination with the metaverse seems strongly linked with social media, despite the company’s comments that it views the metaverse as a shift. Social media is an anemic version of a virtual reality, something like the D&D model, one that relies on imagination to frame the virtual world. The metaverse presumes that the attraction of social media could be magnified by making that virtual world more realistic.

Many people today post profile pictures that don’t reflect their real/current appearance. In a metaverse, of course, you could be represented by an avatar that looked any way you like. Somebody would be selling these, of course, including one-off NFT avatars. There would also be a potential for creating (and selling) “worlds” that could be the environment in which users/members/players interacted. You can see why Facebook might be very interested in this sort of thing, but that doesn’t mean it would be an easy transformation.

One issue to be faced is simple: value. We’ve probably all seen avatars collaborating as proxies for real workers, and if we presume a metaverse could be implemented properly, that could likely be done. The question is whether businesses would value the result. Sure, I could assume that a virtual-me wrote on a virtual-whiteboard and other virtual-you types read the result through artificial-reality goggles, but would that actually increase our productivity? Right now, we’re all talking as though the metaverse were an established technology, and positing benefits based on the most extensive implementation. Is that even possible?

Metaverse today demands a high degree of immersion in a virtual reality (like a game) and a high-level integration of the real world with augmentation elements in augmented reality scenarios. Most aficionados believe that metaverses require AR/VR goggles, a game controller or even body sensors to mimic movements, and a highly realistic and customized avatar representing each person. As such, a metaverse demands a whole new approach to distributed and edge computing. In fact, you could argue that a specific set of principles would have to govern the whole process.

The first principle is that a metaverse has to conform to its own natural rules. The rules don’t have to match the real world (gravity might work differently or not at all, and people might be able to change shapes and properties, for example) but the rules have to be there, even a rule that says that there are no natural rules in this particular metaverse. The key thing is that conformance to the rules has to be built into the architecture that creates the metaverse, and no implementation issues can impact the way that the metaverse and its rules are navigated.

The second principle is that a metaverse must be a convincing experience. Those who accept the natural rules of the metaverse must see those rules in action when they’re in the metaverse. If you’re represented by an avatar, the avatar must represent you without visual/audible contradictions that would make the whole metaverse hard to believe.

Rule three is that the implementation of a metaverse must convey the relationships of its members and its environments equally well to all. This is the most difficult of the principles, the one that makes the implementation particularly challenging. We might expect, in the real world, to greet someone with a hug or a handshake, and we’d have to be able to do that in the metaverse even though the “someones” might be a variable and considerable geographic distance from each other.

Rule one would be fairly easy to follow; the only issues would emerge if the implementation of a metaverse interfered with consistent “natural-for-this-metaverse” behavior. It’s rules two and three, and in particular how they interact in an implementation, that create the issue.

If you’ve ever been involved in an online meeting with a significant audio/video sync issue, or just watched a TV show that was out of sync, you know how annoying that sort of thing is, and in those cases it’s really a fairly minor dialog sync problem. Imagine trying to “live” in a metaverse with others, where their behavior wasn’t faithfully synchronized with each other, and with you. Issues in synchronization across avatars and the background would surely compromise realism (rule two) and if they resulted in a different view of the metaverse for its inhabitants, it would violate rule three.

Latency is obviously an issue with the metaverse concept, which is why metaverse evolution is increasingly seen as an edge computing application. It’s not that simple, of course. Social media contacts are spread out globally, which means that there isn’t any single point that would be “a close edge” to any given community. You could host an individual’s view of the metaverse locally, but that would work only as long as all of the other inhabitants were local to the same edge hosting point. If you tried to introduce a “standard delay” to synchronize the world view of the metaverse for all, you’d introduce a delay for all that would surely violate rule two.
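The arithmetic of a “standard delay” makes the problem obvious. The latencies below are invented but plausible: synchronizing everyone to the slowest participant means one distant member sets the delay for the whole group.

    # Illustrative latencies in milliseconds; not measurements.
    def uniform_delay_ms(participant_latencies_ms):
        """Delay needed to give every participant the same world view."""
        return max(participant_latencies_ms)

    local_group = [8, 12, 15, 20]        # all near the same edge site
    with_remote = local_group + [180]    # one member on another continent

    print(uniform_delay_ms(local_group))   # 20 ms: barely noticeable
    print(uniform_delay_ms(with_remote))   # 180 ms: everyone now feels the lag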

An easy on-ramp to a metaverse, one that avoids the latency problem, would be to limit the kinds of interactions. Gaming where a player acts against generated characters is an example of this. Avoiding latency problems when players/inhabitants interact with each other would require limiting interactions to the kind that latency wouldn’t impact severely. We may see this approach taken by Facebook and others at first, because social-media users wouldn’t initially expect to perform real-world physical interactions like shaking hands. However, I think this eventually becomes a rule two problem. That would mean that controlling latency could end up as a metaverse implementation challenge.

One possible answer to this would be to create “local metaverses” that would represent real localities. People within those metaverses could interact naturally via common-edge technology. If someone wanted to interact from another locality, they might be constrained to use a “virtual communicator”, a metaverse facility to communicate with someone not local, just as they’d have to in the real world.

Another solution that might be more appealing to Facebook would be to provide rich metaverse connectivity by providing rich edge connectivity. If we supposed that we could create direct edge-to-edge links globally, each of which could constrain latency, then we could synchronize metaverse actions reasonably well, even if the inhabitants were distributed globally. How constrained latency would have to be is subjective; gaming pros tell me that 50 ms would be ideal, 100 ms would be acceptable, and 200 ms might be tolerable. The speed of light in fiber is roughly 128 thousand miles per second, so a hypothetical fiber mesh of edge facilities could connect any two points on the globe in just under 100 ms one way, if there were no processing delays to consider.
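For anyone who wants to check the propagation math, here it is; the distance and fiber speed are rough figures, and real paths would add routing and processing delays on top.

    # One-way fiber propagation delay for the longest direct path
    # (roughly half the Earth's circumference). Rough figures only.
    FIBER_MILES_PER_SEC = 128_000       # about two-thirds of c in a vacuum
    HALF_CIRCUMFERENCE_MILES = 12_450

    one_way_ms = HALF_CIRCUMFERENCE_MILES / FIBER_MILES_PER_SEC * 1000
    print(f"{one_way_ms:.0f} ms one-way")   # about 97 ms before any processing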

The obvious problem with this is that a full mesh of edge sites would require an enormous investment. There are roughly 5,000 “metro areas” globally, so fully meshing them with simplex fiber paths would require about 25 million fiber runs (roughly 12.5 million full-duplex connections). If we were to create star topologies of smaller-metro-to-larger-metro areas, we could cut the number of meshed major metro areas down to about 1,000, but that only gets our fiber simplex count down to about a million paths. The more we work to reduce the direct paths, the more handling we introduce, and the more handling latency is created.
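The mesh arithmetic itself is easy to verify: n sites need n*(n-1)/2 duplex connections, or twice that in simplex paths.

    def mesh_links(n_sites):
        """Full-mesh link counts: (duplex connections, simplex paths)."""
        duplex = n_sites * (n_sites - 1) // 2
        return duplex, duplex * 2

    print(mesh_links(5000))   # about 12.5 million duplex, 25 million simplex
    print(mesh_links(1000))   # about half a million duplex, a million simplex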

Obviously some mixture of the two approaches is likely the only practical solution, and I think this is what Facebook has in mind in the longer term. They may start with local communities where latency can be controlled and allow rich interaction, then see where they could create enough edge connectivity to expand community size without compromising revenue.

Telcos and cloud providers, of course, could go further. Google and Amazon both have considerable video caching (CDN) technology in place, and they could expand its scope to include edge hosting. The same is true for CDN providers like Akamai. Social media providers like Facebook might well hope that one of these outside players invests in heavily connected edge hosting, so they can take advantage of it.

Technology isn’t the problem here, it’s technology cost. We know how metaverse hosting would have to work to meet our three rules, but we don’t know whether it can earn enough to justify the cost. That means that the kind of rich metaverse everyone wants to talk and write about isn’t a sure thing yet, and it may even take years for it to come to pass. Meanwhile, we’ll have to make do with physical reality, I guess.

Fixing the Internet: Nibbles, Bites, Layers, and Parallels

The recent Facebook outage, which took down all the company’s services and much of its internal IT structure, certainly frustrated users, pressured the company’s network operations staff, and alarmed Internet-watchers. The details of the problem are still sketchy, but there’s a good account of how it evolved available from Cloudflare. Facebook said that human error was at the bottom of the outage, but the root cause may lie far deeper, in how we’ve built the Internet.

Most networking people understand that the Internet evolved from a government research and university project. The core protocols, like TCP and IP, came from there, and if you know (or read) the history, you’ll find that many aspects of those early core protocols have proven useless or worse in today’s Internet. Some have been replaced, but others have evolved.

If something proves to be wrong, making it right is the obvious choice, but it’s complicated when the “something” is already widely used. When the Worldwide Web came along in the 1990s, it created the first credible consumer data service, and quickly built a presence in the lives of both ordinary people and the companies and resources they interact with. That success made it difficult to make massive changes to elements of those core protocols that were widely used. We face the consequences every day.

Most of the Internet experts I talk with would say that if we were developing protocols and technologies for the Internet today, from scratch, almost all of them would be radically changed. The inertia created by adoption makes this nearly impossible. Technology and Internet business relationships are interwoven with our dependence on the Internet, and to liken the Internet to a glacier understates the reality. It’s more like an ice age.

BGP appears to be at the root of the Facebook problem, and most Internet routing professionals know that BGP is complicated, and (according to many) almost impossibly so. The workings of the Domain Name System (DNS), which translates commonly used domain names into IP addresses, also played a part. BGP is the protocol that advertises routes between Internet Autonomous Systems (ASes), but it’s taken on a lot of other roles (including roles in MPLS) over time. It’s the protocol that many Internet experts say could benefit from a complete redesign, but they admit that it might be totally impossible to do something that radical.
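
For readers who haven’t worked with it, here’s a radically simplified sketch of the idea behind BGP route selection, using invented AS numbers and just two of the attributes (local preference policy, then AS-path length) that the real protocol weighs. The point is only to show the core “advertise paths and pick among them by policy” mechanism that the added roles and tie-breakers have complicated over time.

```python
# Radically simplified BGP-style route selection for one prefix. Real BGP has
# many more attributes and tie-breakers; this only shows the core idea of
# preferring routes by policy (local preference), then by shortest AS path.
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str
    as_path: tuple    # ASes the advertisement traversed, nearest first
    local_pref: int   # operator policy knob; higher wins

def best_route(routes):
    return max(routes, key=lambda r: (r.local_pref, -len(r.as_path)))

advertised = [
    Route("203.0.113.0/24", as_path=(64500, 64501, 64502), local_pref=100),
    Route("203.0.113.0/24", as_path=(64510, 64502),        local_pref=100),
    Route("203.0.113.0/24", as_path=(64520, 64521, 64502), local_pref=200),
]
print(best_route(advertised))   # policy (local_pref 200) beats the shorter AS path
```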

It’s demonstrably not “totally impossible”, but it may be extraordinarily complicated. SDN, in its “true” ONF OpenFlow form, was designed to eliminate adaptive routing and centralize route control. Google has used this principle to create what appears to be an Autonomous System domain of routers but is actually an SDN network. The problem is that to get there, they had to surround SDN with a BGP proxy layer so that the Google stuff would work with the rest of the Internet. Could another layer of SDN replace that proxy, letting AS communications over some secure channel replace BGP? Maybe.
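
To illustrate what “centralized route control” means in practice, here’s a hedged sketch (topology and link costs invented) of the SDN pattern: a controller with a global view computes paths and installs next-hop entries on each node, rather than each node negotiating routes with its neighbors the way BGP speakers do.

```python
# Sketch of centralized route control: a controller that knows the whole
# topology computes a path and "pushes" forwarding entries to each hop.
import heapq

TOPOLOGY = {           # node -> {neighbor: link cost}; invented for illustration
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 1, "D": 5},
    "C": {"A": 4, "B": 1, "D": 1},
    "D": {"B": 5, "C": 1},
}

def shortest_path(src, dst):
    # Dijkstra over the controller's global view; returns the list of hops.
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in TOPOLOGY[node].items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return None

path = shortest_path("A", "D")
for hop, next_hop in zip(path, path[1:]):
    print(f"install on {hop}: destination D -> next hop {next_hop}")
```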

Then there’s DARP, not the defense agency DARPA (or its earlier ARPA form), but the Distributed Autonomous Routing Protocol. DARP was created by the startup Syntropy, which has a whole suite of solutions to current Internet issues, including some relating to fundamental security, fostering what’s called “Web3”. DARP uses Syntropy technology to build a picture of Internet connectivity and performance. It’s built as a kind of parallel Internet that looks into, and down on, the current Internet and provides insights into what’s happening. However, it can also step in to route traffic if it has superior routes available. This means that the current Internet could be made to evolve to a new state, or that it could use DARP/Syntropy information to drive route decisions in legacy nodes.
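
The “step in when it has a superior route” behavior is easy to picture with a small sketch. This is not Syntropy’s actual algorithm; the node names and latency matrix are invented, and the logic is simply “compare the measured direct path with the best single-relay path through another monitored node.”

```python
# Illustrative overlay routing decision: use a relay node when the measured
# relayed latency beats the direct path. Latencies (ms) are invented.
NODES = ["NYC", "LON", "FRA", "SIN"]
LATENCY_MS = {
    ("NYC", "LON"): 80, ("NYC", "FRA"): 95, ("NYC", "SIN"): 260,
    ("LON", "FRA"): 15, ("LON", "SIN"): 170, ("FRA", "SIN"): 150,
}

def lat(a, b):
    return LATENCY_MS.get((a, b)) or LATENCY_MS.get((b, a))

def best_route(src, dst):
    direct = lat(src, dst)
    relayed_ms, relay = min(
        (lat(src, r) + lat(r, dst), r) for r in NODES if r not in (src, dst)
    )
    if relayed_ms < direct:
        return f"{src} -> {relay} -> {dst}: {relayed_ms} ms (direct is {direct} ms)"
    return f"{src} -> {dst} direct: {direct} ms"

print(best_route("NYC", "SIN"))   # relay via FRA: 95 + 150 = 245 ms beats 260 ms direct
```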

The security issues of the Internet go beyond protocol problems like BGP, of course. Many feel that we need to rethink the Internet in light of current applications, and its broad role in our lives and businesses. The Web3 initiative is one result of that. It’s explained HERE and hosted HERE, and it has the potential to revolutionize the Internet. Web3 has a lot of smarts going for it, but working against it is the almost-religious dedication many in the Internet community have to the protocol status quo. The media also tends to treat anything related to changing the Internet as reactionary at best and sinister on the average.

The scope of Web3 is considerable: “Verifiable, Trustless, Self-governing, Permissionless, Distributed and robust, Stateful, and Native built-in payments,” to paraphrase the content of my first link. There’s a strong and broad reliance on token exchanges, including blockchain and even a nod to cryptocurrency. The first of my two references above offers a good explanation, and I don’t propose to do a tutorial on it here, just to comment on its mission and impact.

There is little question that Web3 would fix a lot of the issues the Internet has today. I think it’s very likely that it would also create some new issues, simply because some players in something as enormous and important as the Internet are going to respond to change as a threat, and will try to game the new as much as they have the old. The fact that Web3 has virtually no visibility among Internet users, and only modest visibility within the more technical Internet/IP community, demonstrates that the concept’s biggest risk is simply becoming irrelevant. People will try band-aids before they consider emergency care.

That’s particularly true when the band-aid producers are often driving the PR. Network security is now a major industry, and the Internet is creating a major part of the risk and contributing relatively little to eliminating it. We layer security on top of things, and that process creates an endless opportunity for new layers, new products, new revenues. This has worked to vendors’ benefit for a decade or more, and they’re in no hurry to push an alternative that might make things secure in a single stroke. In any event, any major change in security practices would tend to erode the value of being an incumbent in the space. Since most network vendors who’d have to build to Web3 are security product incumbents, you can guess their level of enthusiasm.

They have reason to be cautious. Web3 is so transformative it’s almost guaranteed to offend every Internet constituency in some way. The benefits of something radically new are often unclear, but the risks are usually very evident. That’s the case with Web3. I’ve been involved in initiatives to transform some pieces of the Internet and its business model for decades, and even little pieces of change have usually failed. A big one either sells based on its sheer power, or makes so many enemies that friends won’t matter.

Doing nothing hasn’t helped Internet security, stability, or availability, but doing something at random won’t necessarily help either, and in fact could make things worse. I see two problems with Web3 that the group will have to navigate.

The first problem is whether parallel evolution of Internet fundamentals can deliver more than layered evolution. When does the Internet have too many layers for users/buyers to tolerate? When does a cake have too many layers? When you can’t get your mouth open wide enough to eat it. The obvious problem is that a single big layer is as hard to bite into as a bunch of smaller ones. Things like the Facebook problem should be convincing us that our jaws are in increased layer jeopardy, and it may be time to rethink things. The trick may be to make sure the parallel-Internet concepts of Web3 actually pay off for enough stakeholders, quickly enough, to catch on, rather than die on the vine.

The second problem is the classic problem with parallelism, which is how much it can deliver early on, particularly to users who are still dependent on the traditional Internet. It seems to me that Web3 could deliver value without converting a global market’s worth of user systems, but it would deliver more value once most browsers embraced it. Is the limited early value enough to sustain momentum, to advance Web3 to the point where popular tools would support it? I don’t know, and I wonder if that point has been fully addressed.

My view here is that Web3 is a good model, but the thing that keeps it from being a great model is that it bites off so much that chewing isn’t just problematic, it’s likely impossible. What I’d like to see is something that’s designed to add value to security and availability, rather than something that tries to solve every possible Internet problem. The idea here is good, but the execution to me seems just too extreme.

VMware Prepares for Life on its Own

VMware, like most vendors, has regular events designed to showcase its new products and services, and VMware’s VMWorld 2021 event is such a showcase. The stories the company told this year are particularly important given that the separation of VMware and Dell is onrushing, and everyone (including Wall Street and VMware’s customers) is wondering how the firm will navigate the change. It’s always difficult to distill an event like this, because there are usually announcements touching many different technologies, so laying out the main themes is important.

The first such “main theme” is the cloud, meaning specifically hybrid and multi-cloud. VMware has a very strong position in the data center, earned largely by its early leadership in virtual machines. The cloud has pretty clearly moved on to containers, though, and in any event containers are much easier to deploy because they carry their application configuration information with them. VMware has an excellent container framework in Tanzu, one of the best in the industry if not the best, but it’s been dancing a bit around harmonizing its container future with its VM antecedents.

What seems to be emerging now is fairly straightforward. If we assume that applications have to deploy in both the cloud and the data center, and that “the cloud” means two or more clouds (multi-cloud), then there is a strong argument that an enterprise with this sort of deployment model could well want to use VMs in the data center and in all of its public clouds, and then use Tanzu to host containerized applications and tools in those VMs. This is the Tanzu-on-vSphere model. It would create a unified container deployment environment across everything, based on Tanzu, and support vSphere VM applications as usual.
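
As a concrete (and hedged) illustration of what that unified environment could look like operationally, here’s a sketch using the standard Kubernetes Python client to apply the same containerized workload to an on-premises Tanzu cluster and to Tanzu clusters running in VMs in two public clouds. The kubeconfig context names, namespace, and image are hypothetical, and nothing here is specific to VMware’s own tooling.

```python
# Apply one Deployment to several clusters registered as kubeconfig contexts.
# Context names and image are hypothetical; assumes `pip install kubernetes`.
from kubernetes import client, config

CONTEXTS = ["tanzu-onprem", "tanzu-aws", "tanzu-azure"]   # hypothetical clusters

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="storefront"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "storefront"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "storefront"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="web", image="registry.example.com/storefront:1.0"),
            ]),
        ),
    ),
)

for ctx in CONTEXTS:
    apps = client.AppsV1Api(config.new_client_from_config(context=ctx))
    apps.create_namespaced_deployment(namespace="default", body=deployment)
    print(f"storefront deployed to {ctx}")
```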

The positioning anchor for all of this seems to be application modernization (AppMod), which is smart because the concept includes the creation of cloud front-ends for legacy applications, not just the “moving to the cloud” model that I think is out of touch with current software evolution reality. Tanzu, as the company indicated in its conference, is the future of VMware, and that realization is perhaps the most critical point in their overall positioning. The company seems committed to shedding its relentless VM focus, and that’s essential if they’re to remain relevant.

My only concern with the Tanzu positioning is that a bewildering set of products is cited, and it’s not easy to establish the role of each or its relationship to any specific mission. That’s particularly noticeable in networking, which VMware spreads across at least three of its major announcement topic areas. VMware’s NSX virtual networking strategy is perhaps the original offering in the space (they got the first version when they acquired Nicira, the first big player there), and I think it would have been smart for VMware to try to focus networking on NSX just as they’re trying to focus hosting on Tanzu.

VMware has been active in the telco vertical for years, but its presentation of its telco products lumps them with SD-WAN in the “Edge” category. If you were to presume that “edge” means “edge hosting”, then it would be logical to say that Tanzu belongs there, and in fact the strongest possible positioning for VMware in the telco space would be based on Tanzu, with support from specialized telco offerings (VMware Telco Cloud Infrastructure and Telco Cloud Platform) and virtual networking (NSX and SD-WAN). VMware does claim to support the evolution from VNFs to CNFs, but its solution brief for telcos doesn’t mention Tanzu, and thus doesn’t tie in either its main enterprise strategy or its edge computing story.

At VMWorld, they announced the “Cloud Native Box”, a joint activity with partner Accenture. This is what the VMware blog said: “The Cloud Native Box is a market-ready solution that supports multiple use cases with unlimited potential for specific deployment models, from core to edge and private networks solutions, depending on the business demands of each company. As a pre-engineered solution it offers proven interoperability among radio components, open computing, leading edge VMware Telco Cloud Platform (TCP) and a plethora of multivendor network workloads, with unparalleled lifecycle management automation capabilities.” It seems pretty clear that Cloud Native Box is aimed at resolving the telco issues I just cited, but how it does that can’t be extracted from the VMware blog, so we’ll have to wait to assess what the announcement will mean.

Ever since VMware acquired Pivotal and its Cloud Foundry cloud-native development and deployment tools, there’s been a bit of a struggle to avoid running two container/cloud strategies in parallel. The show indicates that VMware is making progress integrating Pivotal Cloud Foundry with Tanzu, but the challenge hasn’t been eased by the loosey-goosey definition of “cloud native” that’s emerged. Tanzu Application Services is how the Cloud Foundry stuff is currently packaged, but many see the “Tanzu” name more as an implied direction than as an appropriate label for current progress in consolidating the two container frameworks.

The difficulty here is that Tanzu Application Services is really a cloud-native development and deployment environment, while Kubernetes is really container orchestration. There’s real value in the old Cloud Foundry technology beyond its installed base, and of course VMware wants the integration to increase value for customers and prospects, not toss everything to the wolves. They’re not there yet, but they’re making progress.

I think that there are two factors that have limited VMware’s ability to promote its stuff effectively. One is the classic challenge of base-versus-new. VMware has a large enterprise installed base who obviously know VMware-speak, and retaining that base is very important, particularly to the sales force. Not only does that guide positioning decisions, it also influences product development, aiming it more at evolution of the base than at new adoption. That’s reasonable in the enterprise space because the base is large, but it doesn’t serve well in the telco world.

The second factor is a positioning conservatism that I think developed out of the Dell/VMware relationship. Obviously, the two companies need to be somewhat symbiotic, which means that they can’t position in a way that essentially makes them concept competitors. Now that there’s going to be a spin-out-or-off-or-whatever, VMware will have to stand on its own, but until then it’s important that neither company rocks the collective boat too much.

Any major business change creates challenges, and either major M&A or spin-outs are surely major business changes. Executives are always preoccupied with these shifts, and in the case of VMware, so are many critical employees. Some look forward to being independent, thinking they were constrained by Dell, which they were. Some fear it, concerned that Dell might shift its focus to a broader mix of platform software, at VMware’s expense, which they likely will. In short, nobody really knows how this is going to turn out, and what the best strategy for VMware will be for the future.

Whatever it is, it needs to address the issues I’ve cited above. In fact, it needs to address them more than ever, because optimizing the favorable implications of the spin-out and minimizing the risk of the negative starts with having a story that’s not just coherent and cohesive, but also exciting beyond the VMware base. Up to now, getting that story has proved problematic for VMware, and they can’t afford to let those past positioning difficulties contaminate their future.