VMware and the Open Grid Alliance MIGHT Move the Edge Compute Ball

VMware has been making a lot of Open RAN announcements, and that’s important. Yes, it demonstrates the strength of the whole open-model network approach. Yes, it demonstrates the strength of the Open RAN movement and the impact that might have on telecom infrastructure. The real story, though, is that not all Open RAN successes have equal potential to create a seismic shift in the market, and that potential depends on just how a given strategy leverages those two “demonstrates”.

Open RAN has three essential dimensions. First, it’s a software model for how to make the 5G RAN more open than 3GPP specs alone would allow. That limits the influence that the giant mobile-network players have on the market. Second, it defines an open model for function hosting, a contender for the next generation of NFV. Finally, it potentially defines the launch point for a general edge computing vision. From a market impact perspective, it would be ideal to address all three of these dimensions in an implementation.

Whether that happens depends in large part on just who is proposing that implementation. We’ve all seen, over the last decade in particular, that vendors do the stuff that boosts their own financials, not the thing that advances the market overall. That’s not a bad, nor an unexpected, thing. However, it means that a given vendor is likely to work to maximize only those of the three essential Open RAN dimensions that suit their own business needs. The narrower those needs, the less the chance a vendor will cover the market-wide bases.

We also have vendor categories to match with our three dimensions, then. One category is the network vendor, another is the IT vendor, and the third is the cloud vendor. Network vendors are very likely to focus on the first of our three Open RAN dimensions, excluding the others. IT vendors are likely to focus primarily on the second dimension, with perhaps a nod to the other two. It’s the cloud vendors, which include both cloud providers and cloud platform software providers, that have the most overlap between their own business goals and the three dimensions of Open RAN.

VMware is one of those third-category vendors. They’ve been very active in the Open RAN space, but they’re also a major player in cloud platform software, and because they’ve cultivated relationships with public cloud providers as well as server players, they have a foot in the door of the hosting dimension. One particular thing seems to define VMware’s position across our three dimensions: its role in the Open Grid Alliance.

The Open Grid Alliance, or OGA, was established by VMware and Vapor IO in April of last year. It bills itself as “the next step in the evolution of the Internet”, which is a claim a lot of technologies make these days, including Web3. If the claim is common enough to be almost trite, the pathway to fulfilling it is not. What the OGA is working on is a model of distributed processing, essentially a global compute fabric created by uniting compute resources in the cloud, online, and elsewhere.

The term “grid” may be a bit overloaded here. Originally, grid computing was a structure that created a parallel-processing model for applications that can be divided into what are essentially independent processes, each doing something unique, whose results are then combined to serve some over-arching and glorious goal. The primary applications have been statistical and scientific. OGA is taking the step of broadening the notion of a grid to envelop any mission for cooperative, distributed processing on the one hand, and, on the other, the notion that this new “grid” has to be able to support any number of separate applications.

The reason OGA is important to our assessment of VMware and Open RAN is that it would characterize Open RAN elements as simply applications of the grid. Things like IoT and the metaverse would also be applications of the grid, so the aim here is to create a platform that can host all of the emerging online application elements. By framing Open RAN in what’s a broad distributed/cooperative computing model, the OGA approach covers all three dimensions of Open RAN impact. It also takes a mighty technical and market bite.

If OGA really aspires to define a new model for the Internet, a model based as much (or even more) on cooperative computing as on networking, it’s attacking a mission that may well be unequaled in size anywhere in tech today. It’s a mission that overlaps with everything in computing, the cloud, and online services, and with a number of newer initiatives like Google’s Nephio, blockchain, Web3, and the metaverse. I’m not for a moment suggesting that a broad technical scope isn’t a major asset in terms of addressing the future evolution of all these areas, only that it approaches and perhaps passes the “boiling the ocean” threshold.

This need to address a very broad scope may be why there’s little information on just what OGA proposes, other than good intentions. From the intentions, it seems OGA is targeting the relationships among compute elements deployed in a grid model more than the mechanics of deploying them. That would make it a higher-level attack on edge computing than Nephio proposes, but just how high a level is being addressed, or how it would work, isn’t covered yet.

The broad scope also puts the OGA approach in collision with a lot of major players. VMware is the second-largest company in the OGA (Deutsche Telekom is the largest). Dell, which has an obvious relationship with VMware, is another large member, but none of the other server or software giants are involved, nor are any of the major network vendors or cloud providers. The fact that there’s not a herd of telecom players in the OGA is perhaps the biggest challenge, because without broad telco support it will be difficult to promote broader vendor participation.

VMware’s advantage here is that they’re in what may be the best position to address these issues. Their cloud and virtualization software suite is as good as or better than anything else in the industry. They have a very strong Open RAN position, taken by a quality telecom industry group within the company. They have, as I’ve noted, relationships with the public cloud providers and also with server vendors. They’re largely network-vendor-agnostic, though their SD-WAN stuff collides with similar offerings by players like Cisco. The telcos know them and respect their products.

The revolutionary potential of my three dimensions lies in getting them all under one roof, technology-wise at least. Since VMware is the technology heart of OGA, they can control whether that’s accomplished simply through what they field on their own. Getting others to cooperate would make things even better, but it’s not essential. This is VMware’s game to win, or lose, and to me that means that VMware needs to step up and drive the OGA to offer more than glitzy promises.

There’s always a “but”, it seems, and there is one here. It’s the all-too-common issue of marketing and positioning. I can sell a tree much more easily than I can sell a forest; that’s an easy point to get agreement on. It’s also true that I can market the tree more easily. The value proposition is easier to explain, in no small part because the properties of and applications for a tree are easy to describe. The forest situation is another matter completely. Like many tech companies, VMware has struggled to position its offerings to people with real budget authority. Selling to technical types instead is fine, but only when the need for the class of product is widely accepted and it’s simply a matter of selecting the supplier. Grids don’t fit that description; there will have to be a solid strategy to position the concept, and to do so relative to all three of our dimensions of Open RAN.

The good news for both VMware and the OGA is that the marketing and positioning will be at least as difficult for anyone else, and possibly even harder. The OGA has both a technical and a market lead on the issues, and even though the positioning of both VMware and the OGA could and should have been stronger from the first, strength is relative and everyone else is even weaker. However, this isn’t going to last forever. There are not only risks that others (Red Hat comes to mind, as does HPE) might jump out and do something arresting, but also risks that the multi-dimensional approach itself could be contaminated by the “death of a thousand components”. If the broad mission is fragmented by piece-part implementations that are highly credible, the value of the broad approach is reduced.

Public Cloud Winners and Winning Strategies

The rich get richer, so the saying goes, and in the public cloud space that’s likely true, because the big are getting bigger. According to an article in SDxCentral, the big three of public cloud (Amazon, Google, and Microsoft) took a larger share of cloud spending, 62% in the latest quarter versus 58% a year ago. The relative shares among those three stayed roughly the same, according to the same research. What’s happening underneath is, or should be, the big question. There is movement, both real and potential, in the cloud market according to almost everyone, and I see signs of it in my contacts with enterprises.

The public cloud market is really two markets. One, the segment most people think about, is the public cloud spending by “ordinary” businesses, the enterprises and some SMBs. The other is the cloud spending by Internet or OTT companies, especially startups. Amazon’s cloud lead, and much of its growth, comes from the second segment, while Microsoft and Google do better with the first. This distribution of spending is important for a number of reasons, some pedestrian and some with broad industry implications.

One important, and obviously industry-impactful, point is whether telecom is an ordinary business or related to the Internet and OTT side. Telecom represents an enormous growth opportunity for the cloud, and if current cloud-provider dominance patterns are followed for telecom, then whether it’s “enterprise” or OTT really matters. What I hear from the industry is that the telcos would rather deal with Microsoft or Google than with Amazon for cloud services, but many are planning a bit of a multi-cloud hedge in the near term as they work out just what they’d do with the public cloud.

The largest driver for public cloud use by the telcos is 5G hosting. There are a number of drivers for that, but the two leading ones are the need to supply 5G hosting outside their home regions, the only places where they have central office real estate, and the question of whether they could deploy efficient (in economy-of-scale terms) infrastructure even within their regions. However, some operators are looking ahead to other edge computing missions, and some want to offer enterprises cloud services by rebundling the services of public cloud providers. These last two missions are often seen as a way of building a bigger commitment to cloud usage, to secure better pricing.

A less pressing but potentially even more important question relates to the metaverse. A number of companies have been selling NFT “lots” or “land” in a metaverse, demonstrating that there will surely be many metaverses created. The infrastructure needed to host a truly functional metaverse would be far more expensive than any startup would likely be able to afford, so the presumption here is that most will rely on public cloud/edge resources. The more metaverses there are, and the more that align more generally with digital twinning than with specific social-media missions, the more these will resemble enterprises rather than OTTs.

In a sense, these virtual countries or worlds are aimed at creating a kind of portable metaverse user base by collecting investors who are then obviously incentivized to support the metaverse they’ve invested in. A cloud provider, a social network provider, or even a mass online retailer (Amazon comes to mind) could decide to offer virtual-world-hosting capability, and by doing so pull all the users, and any companies who then work to build experiences within their virtual worlds, into their domain. Since only large players could hope to sell broad metaverse hosting, this would create a model much like that of the OTT world.

Small-scale metaverses, meaning in particular those designed to support a limited geography, will certainly look like enterprise public cloud consumers in service terms. Non-social metaverses, or social metaverses designed to augment the real world rather than create an alternate reality, would be particularly likely to have constrained geographies, even perhaps down to the level of a single company or application within a company. I’ve noted before that a “metaverse of things” or MoT might become the general model for an IoT mission, and MoT hosting in this situation is almost certainly going to mean hybridizing local compute and sensor/controller technology with the cloud. This is very much like an enterprise hybrid cloud mission of the kind that Microsoft is already winning with, and that Google is surely targeting.

A broad question that arises with any metaverse model is the traffic associated with delivering the experience to the user. Obviously a part of that is the kind of experience we’re talking about; visual experiences are richer in an information-content sense. Another part is the hosting model, and in particular the extent to which the user’s individual device or local compute resource participates in the metaverse. Do we render a model of a metaverse “locale” locally for each user, or in an edge data center? If the former, we need to send only the model; if the latter, we need to send the current view of the model, which is much more bandwidth-consuming.
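To put rough numbers on that difference, here’s a minimal back-of-the-envelope sketch in Python. The model size, session length, state-update rate, and rendered-video bitrate are all illustrative assumptions of mine, not figures from any particular metaverse implementation.

```python
# Illustrative comparison of the two delivery models described above:
# send the locale model and render locally, versus render at the edge
# and stream the resulting view. All figures are assumptions.

MBIT = 1_000_000  # bits per megabit

def local_render_avg_bps(model_mbytes=50, session_secs=3600, update_kbps=256):
    """Model is sent once, then only small state updates flow."""
    model_bits = model_mbytes * 8 * 1_000_000
    update_bits = update_kbps * 1_000 * session_secs
    return (model_bits + update_bits) / session_secs

def edge_render_bps(video_mbps=15):
    """The rendered view is streamed continuously, like high-quality video."""
    return video_mbps * MBIT

local = local_render_avg_bps()
edge = edge_render_bps()
print(f"local rendering: ~{local / MBIT:.2f} Mbps average over the session")
print(f"edge rendering:  ~{edge / MBIT:.0f} Mbps for the whole session")
print(f"edge delivery needs roughly {edge / local:.0f}x the bandwidth")
```

Even with generous assumptions about model size, streaming the rendered view consumes dozens of times the bandwidth of shipping the model and letting the device render it.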

The question of the metaverse model and the nature of the experience it’s designed to deliver is the most complex element of future cloud opportunity, and thus the thing most likely to have a large and unexpected impact on market share. This, IMHO, is the place where Google could shine, since I believe Google is the most technologically sophisticated of the public cloud providers. However, that’s a weakness as well as a strength, and that plays to the last of the factors that could reshape the public cloud market—positioning.

Amazon understands startups, Microsoft understands enterprises, and Google understands technology. Google’s challenge is that it may well be too smart. The company has a bit of the classic problem of being too geeky to communicate with the real world. Amazon’s problem is that its startup mission makes it a bit reactive to VC interests and biases, and Microsoft’s is that its dominant base is conservative in terms of technology. All of these present positioning challenges with the future of the cloud.

For Amazon, the metaverse intersection with startups is the big question. If social startups focus on creating the virtual world rather than the metaverse experience, then can Amazon figure out what metaverse hosting requires, and deliver it? If not, then there’s a good chance they miss a big part of the metaverse driver, and if that driver is a dominant force for the cloud market of the future, they miss out on that too.

For Microsoft, metaverse success with the enterprise means either framing a purely collaborative vision, which is highly limiting and thus dangerous, or promoting an MoT vision, which is going to require a lot of positioning work and relies on IoT to drive enterprises to the cloud. Today, most IoT missions aren’t particularly cloud-centric; they focus on on-prem edge and the data center. Microsoft would need to invent an MoT model that was much more cloud-centric to win with it.

Google has the greatest challenge, but could reap the greatest reward if they could meet it. I’m constantly impressed by Google’s insight into the evolution of the cloud at the technical level, but those insights are such that they can be communicated easily only to highly skilled cloud people, not to the C-level executives who can make budget decisions. In fact, it’s not particularly easy to engage those cloud people except at conferences, where what you say and do is shared with all your competition. Judged by technical capabilities, Google has the absolute lead, but they’ve got to be able to demonstrate that leadership with a model of a metaverse and not with the tools used to build the model.

Cloud market shares and sales growth have been fairly steady in recent years, and if that’s going to change then some major new force has to enter the market. Right now, the metaverse either is that force, or it demonstrates what the force would have to be and do. Cloud providers who want to improve their position and revenue will need to consider this.

What Can We Expect from Ultra Fast Wireline Broadband?

The old “How low can you go?” question may, for broadband at least, be augmented by the question “How much capacity can we sell?” As this story in Light Reading shows, at least some operator planners are looking ahead to things like 10G consumer broadband services. While this might be generating what the media calls “good ink”, there are major questions relating to the viability of ultra-fast wireline broadband. Are we falling into the “5G trap” with wireline, too?

By now, many people realize that the vaunted 5G speed increases don’t really make much of a difference in the experiences mobile users are seeking out. You can’t make video better by pushing it out at a high multiple of the characteristic data rate of the material, which is limited by smartphones’ capacity to display high-resolution content and by our ability to perceive better video on a small screen. But did the publicity on 5G’s speed help promote 5G phones, and thus accelerate 5G deployment? Maybe, but even if that’s true, there are still questions.

What’s interesting about 5G is that it’s a technology shift that has zero chance of failing, and that’s been true from day one. It’s the next step in modern wireless technology. The question was always whether it would generate any incremental revenue, since a shift to 5G was surely going to demand an increased investment in infrastructure. That same question is relevant to faster wireline broadband.

There is no question that broadband consumption has grown steadily over the last couple decades. What’s behind that growth is obviously “more” of something, but the “something” here is video content. We are doing a lot more streaming than ever before, and our consumption of streaming video directly drives up bandwidth. On the basis of consumption history, 10G might not sound too outlandish. But….

….but how much video can really be consumed? The average family of four could all be streaming their own material, and in some cases they do, but my friends in the ISP world tell me that most households don’t have more than two television sets active at once, and that the largest consumer of streams as opposed to bandwidth is get-togethers that involve multiple people using their phones for streaming. A gathering with a dozen people, particularly young people, will often have multiple streams of video to phones because smartphone video tends to be viewed individually, given the limitation of the devices’ screens.

A high-resolution video stream requires about 8 Mbps of bandwidth. Most wireline broadband is moving toward a base speed of 50 to 100 Mbps, and that means (say my ISP engineering friends) that you could reasonably expect to support around five such streams on a base wireline service. However, if we assume that our video is delivered at only smartphone resolution, you’re down to maybe 4 Mbps per stream (neither number assumes optimal compression) and perhaps ten streams. Given that many consumers can get 1 Gbps today, and given that a gigabit would support something like 75 to 150 streams depending on the per-stream requirement, it seems to me that we’re projecting capacity requirements growth beyond reasonable behavioral expectations.
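Here’s a minimal sketch of that arithmetic. The per-stream rates are the ones just cited; the 80% “usable fraction” of the headline tier speed is my own assumption to cover overhead and other household traffic, so the exact counts will vary, but the order of magnitude is what matters.

```python
# Stream-capacity arithmetic for common broadband tiers. Per-stream rates
# come from the text; the usable-fraction figure is an assumption.

PER_STREAM_MBPS = {"HD": 8, "smartphone": 4}
USABLE_FRACTION = 0.8  # assumed share of headline speed available for video

def streams_supported(tier_mbps, per_stream_mbps):
    return int(tier_mbps * USABLE_FRACTION // per_stream_mbps)

for tier_mbps in (50, 100, 1000, 10000):
    hd = streams_supported(tier_mbps, PER_STREAM_MBPS["HD"])
    phone = streams_supported(tier_mbps, PER_STREAM_MBPS["smartphone"])
    print(f"{tier_mbps:>6} Mbps tier: {hd:>4} HD streams, {phone:>4} smartphone streams")
```

Run the same arithmetic at 10 Gbps and you get well over a thousand simultaneous streams, which no household viewing behavior I’m aware of comes close to requiring.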

That’s important because of the how-low-can-you-go question. Most consumers aren’t looking to throw money away, so they tend to be cautious on what they pay for home broadband. They’ll often go with a low-end package and increase speed if they need it, and that behavior shows in the distribution of customers by tier of service, which almost every ISP says groups customers at the low end of the capacity range available. If there’s a real demand for 10G speeds, then you’d expect to see people clustered at the highest speed currently available, and you don’t.

That resistance to paying more also limits the ISP ROI associated with any investment in higher broadband speeds. Even for business services, doubling the speed of a connection never results in doubling the revenue. In consumer broadband, at least in my own geography, the lowest speed available has increased by roughly four times over the last decade, but the price is only about 15% higher. The fact that prices are rising more slowly than capacity is the reason why “profit-per-bit” has fallen so sharply.
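The profit-per-bit point is easy to show with the figures just cited, treating revenue per bit as a rough proxy (the cost side matters too, of course):

```python
# Revenue-per-bit arithmetic using the figures cited above.
capacity_growth = 4.0   # entry-level speed is roughly 4x what it was a decade ago
price_growth = 1.15     # the price is only about 15% higher

revenue_per_bit_ratio = price_growth / capacity_growth
print(f"revenue per bit is now about {revenue_per_bit_ratio:.0%} of its level a decade ago")
```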

The obvious ISP response to this situation is what used to be called “overbooking”, meaning assigning more theoretical capacity than a network can actually carry. The speed of a connection is almost always measured by the rate at which the user interface is clocked. In TDM (time-division-multiplexed) networks, the entire connection path would be clocked at the interface speed, so a megabit interface would mean a megabit of transport capacity. In packet networks, transport capacity is shared with the expectation that traffic will have a random distribution of packets, so one conversation’s peaks can fit in another’s valleys. But suppose we clock an interface at ten gig, knowing that we really have the transport capacity to support only the same actual packet rate we had with a one-gig interface?
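A quick sketch of that overbooking math, assuming a PON-style segment where 32 homes share 10 Gbps of upstream transport; the subscriber count and transport figure are illustrative assumptions, not any operator’s actual design.

```python
# Oversubscription ratio: total access interface capacity divided by the
# transport capacity actually provisioned behind it. Figures are illustrative.

def oversubscription_ratio(subscribers, access_gbps, transport_gbps):
    return (subscribers * access_gbps) / transport_gbps

SUBSCRIBERS = 32      # assumed homes on one PON segment
TRANSPORT_GBPS = 10   # assumed shared transport, unchanged in both cases

for access_gbps in (1, 10):
    ratio = oversubscription_ratio(SUBSCRIBERS, access_gbps, TRANSPORT_GBPS)
    print(f"{access_gbps:>2} Gbps interfaces: {ratio:.1f}:1 oversubscription")
```

Clocking the access interfaces ten times faster changes only the burst speed a user sees when the segment is otherwise idle; the sustained capacity everyone shares is unchanged.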

This sort of thing would likely be caught by investigative journalists if not by regulators, but it might not be noticed if the ISP essentially gave away the 10x speed advantage. Why would they do that? Because the actual cost of the faster fiber connection could be very small if it wasn’t backed up by a commensurate upgrade in packet capacity deeper in the network, and the move would offer a competitive advantage. Since customer acquisition and retention is the largest component of opex, that could make sense.

That’s particularly true given that higher-capacity wireline broadband could be sold to businesses. Branch offices and SMB locations are usually in the same areas as residential users, and these sites could use, and pay for, much faster connections. That could make it smart for ISPs to deploy 10G-capable infrastructure and even offer a reasonable ROI on the incremental investment. After all, it’s the terminating gear that’s different; the fiber itself would likely be the same.

In a way, this is a bit like 5G. Companies like T-Mobile are pushing home broadband using mobile 5G infrastructure, and many are using millimeter-wave technology for that purpose. Remove the constraints on smartphone consumption of 5G and the higher speed can be sold. Same with wireline broadband; remove the presumed dependence on residential consumption, focus on business, and then sing the praises of your “faster” infrastructure, and you have a win.

A big beneficiary of this move could be virtual networking, including and maybe especially SD-WAN. Fiber-PON-based 10G service would be a boon for almost any business site. Could some ISPs switch to 10G PON for most or all of their service delivery? Could we be looking at a transformation of IP VPN services, away from MPLS VPNs to a different technology altogether? It’s possible, and if that happened it would be more of a revolution in networking, including the Internet, than a push to get consumers to 10G.

Are We Reaching the Limit of Ad Sponsorship?

The last week has demonstrated that some of the biggest Internet players are now facing financial, or at least stock-price, challenges. For many, the problem is in their revenue model. We’re seeing, in the aftermath of the pandemic, both the strength and the weakness in the “free ad-sponsored” vision of OTT services. The questions to be considered now are what happens next, what technologies might be impacted, and what vendors might benefit, or suffer, from the shifts.

One problem with ad sponsorship, a problem I’ve mentioned in prior blogs, is that ad spending tends to be fairly static, and at best tends to grow at the rate of GDP growth over time. There’s probably no OTT on the planet that would be satisfied with that level of growth, so everyone is depending on gaining market share. That, of course, means that some will lose it. Online services have been stealing share from other forms of advertising, but eventually the industry will reach an equilibrium, at which point share growth will be difficult and major revenue gains will be possible only outside the model.

The pandemic created a shift in the steady-state picture, because it kept people at home and unable to exercise the traditional strategy of looking around stores for something to buy. We saw growth in online product searches and research, which of course created growth in online ads and ad revenues. The major players like Alphabet/Google, ByteDance/TikTok, Meta/Facebook, and Twitter all benefited from this, and all are seeing a slowing of growth because people are getting out more…or so classical wisdom says.

There may be more to it. One impact of the pandemic was to reduce in-store shopping, and while that impacted advertising focus, it also shifted buyers to online retail fulfillment. That raises a critical question: if I’m going to buy online from Amazon or BestBuy or eBay or Walmart, would I not start my search for a product by searching their sites? Yes, this would miss some of the smaller online sources, but with all the fear of online scams, many people are reluctant to buy from somebody they don’t know and trust. Direct retailer search, of course, bypasses ad influence, and it could actually reduce ad spend.

Having a retail product isn’t a guarantee of continuous profit growth, as Amazon’s earnings last week illustrated. The shift back to “going to the store” will inevitably impact online sales, but every time people increase their dependence on online shopping versus storefronts, there’s a group that doesn’t come back, at least not all the way. There are also indicators that even those who are willing to go out and buy, or who can’t wait for delivery, will do online price shopping, which reduces the influence of advertising.

This isn’t the only problem for the OTTs either. Every OTT ad conduit is impacted by the bad behavior of some people, companies, organizations, or all of the above. The bad behavior is obviously in the eye of the beholder, but some users and advertisers are offended, and angst there can impact regulators, as it has in Europe.

Social media has never been good at self-policing because people aren’t good at it, and because their own revenue interests are often contrary to public good and regulatory policy. In recent years, there’s been so much outcry against bad behavior and misinformation that some platforms have started to work harder, but that isn’t enough for some regulators and is too much for some specific interest groups and investors, like Elon Musk, who has acquired (subject to approval) Twitter. Musk has advocated a mixture of more and less with respect to behavioral constraints, and the “less” part collides with the upcoming EU regulations.

Controversy feeds views, no matter whether we’re talking about news networks or social media. It also feeds disengagement and regulation, and the question now is whether people in general will adjust to a polarizing set of “facts” and learn to dismiss a lot of it, or whether the economic and political impacts will be dire enough to force remediation through government intervention. It’s hard to see how either outcome favors ad sponsorship.

I think all the ad-centric OTTs see this, though they’d not be likely to admit it publicly. I think Meta’s interest in the metaverse arises in part from the realization that they need to find a new revenue model, and that the best model might be one that would be difficult to apply retroactively to an established community like Facebook. I also think that these OTTs see a truth that content producers also see, which is that the best model for revenue gain is to sell something. Networks have been offering (or, in the case of CNN, trying to offer) streaming subscription services. Some existing streaming players, like Netflix, are starting to offer ad-sponsored services at a lower or zero cost, of course, but all that means is that nobody wants to leave any market segment or revenue model uncovered.

The sell-or-sponsor dilemma isn’t going to be resolved immediately in favor of “sell”, but I think it’s inevitable that more and more “new” services will focus on a subscription model, because the upside in ad sponsorship is small and the potential regulatory impact (and the associated monitoring costs) is large.

The impact of this shift on tech overall is best presented as the contrast between Web3 and the metaverse. Web3 is all about decentralization, which means it’s about “un-empowering” the major OTTs. It’s hard for me to see how that would be favored in the kind of revenue-focus shift we’re talking about. It’s very difficult to see how the problem of revenue is solved by Web3, even if you assume that somehow Web3 had a lock on cryptocurrency payments, which it would not. The metaverse, on the other hand, is potentially at the heart of the shift.

A metaverse is first and foremost a new kind of experience, which is attractive for its novelty alone, and also for the fact that since it is new, there’s an opportunity to frame a metaverse as a subscription service. In fact, it would be pretty easy to establish a metaverse revenue model that included both user subscriptions and ad revenues, making a metaverse a bridge between ad sponsorship and subscription revenue for players like Meta.

Since a metaverse could also host collaboration, customer support, user communities built around products, and other business-related, semi-social, activities, it’s also likely that a lot of different players with a lot of different metaverse targets will emerge. That’s already happening to an extent.

Going down a layer in terms of technology, the impact of a shift toward a subscription revenue model would depend largely on who does the shifting and what their own primary service architecture looks like. The metaverse, as I’ve noted in past blogs, would tend to foster edge computing and a metro-mesh network model. Both would obviously be metro-centric, and thus a metaverse shift would track the general trends I outlined in the referenced blog.

It’s possible that a shift toward subscription services could encourage the network operators to think about getting into higher-level services themselves, either as a retail offering or by creating wholesale components designed to encourage OTTs to frame new services on these operator-created elements, as AT&T has proposed. A few operators hope this could even create an opportunity to offer “billing-as-a-service” to OTTs, particularly if some of the new services were usage priced rather than flat-rate.

The interest of content producers in direct subscription services might also create an opportunity for operators. A few giant OTTs could be expected to create their own software/hosting framework, but if the market fragments, many of the potential players are too small to make the investment, and the total competitive overbuild would be too high to bear. Could operators use this as an opportunity to build up mid-layer features for composition into services like streaming? Yes. Will they? That’s a question we may see answered in 2022.