VMware Ups Its Telco Cloud Game

VMware is stepping up in telco cloud. There's no question that the company is drawing its positioning line in the sand, and no question that it has the market position to make that line a promise to buyers and a threat to competitors. The only question is how the moves will balance out, and how they will direct VMware along what's become a pretty tangled path. The biggest issues are that "telco cloud" has become more than just hosting network functions, and that partnerships between public cloud providers and telcos raise doubts about whether there's really any "telco cloud" on the horizon at all. In short, the telco market has been a minefield for cloud vendors, so VMware has a narrow path to navigate if it wants to succeed.

The most important thing about “telco cloud” is its uncertainty. So far, telcos have shown relatively little interest in deploying their own cloud infrastructure, and many have in fact fobbed off the mission to public cloud partnerships. It’s also clear that vendors like VMware would derive relatively little benefit from a “telco cloud” that was confined to a single application like network function hosting, even if there were independent deployments by telcos. Telco cloud is really edge, and so we have to look at everything that targets it in the light of its impact on (and how it’s impacted by) edge computing.

The key element of the VMware announcement is the VMware-provided RAN Intelligent Controller, or RIC. The RIC is the key component of an O-RAN implementation, the thing that differentiates O-RAN from the more monolithic 3GPP 5G RAN model. For VMware to deploy its own version is a big deal; it gives them not only a differentiator relative to firms that use other RICs, but also an opportunity to tie the RIC, and O-RAN, more tightly to their cloud software. It seems clear, based on this, that VMware sees (correctly) that 5G will be an early driver for deployment of hosting resources at the edge, and thus is likely to be the first credible driver for edge computing overall.

The thing about O-RAN, though, is that unless it does lead to edge computing, it becomes nothing more than an open model for hosting some arcane 5G RAN elements. If, on the other hand, it could define and jump-start edge deployment, it could be market-changing. Even so, O-RAN-centric positioning is just one of three possible ways of approaching the edge, so let’s look briefly at the other two in order to compare their effects, and see whether other models might threaten VMware.

The second edge model is the cloud-extension model, which says that edge computing is cloud computing hosted at the metro level rather than in regional data centers. The reason for the metro placement is to control latency; in this model, edge computing is simply low-latency cloud computing.
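
To put rough numbers on that latency point, here's a simple back-of-the-envelope calculation. The distances and the 5-microseconds-per-kilometer fiber figure are my own illustrative assumptions, and queuing, serialization, and processing delays are ignored.

```python
# Rough, illustrative latency comparison: metro edge vs. regional data center.
# Assumes ~5 microseconds per km of one-way propagation in fiber (a common
# rule of thumb) and ignores queuing, serialization, and processing delays.

FIBER_US_PER_KM = 5.0  # one-way propagation delay in fiber, microseconds/km

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds for a given fiber distance."""
    return 2 * distance_km * FIBER_US_PER_KM / 1000.0

for label, km in [("Metro edge (~50 km away)", 50),
                  ("Regional data center (~800 km away)", 800)]:
    print(f"{label}: {round_trip_ms(km):.1f} ms round trip")
# Metro edge (~50 km away): 0.5 ms round trip
# Regional data center (~800 km away): 8.0 ms round trip
```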

The third edge model is the metro fabric model. This is the edge approach that most network equipment vendors give a nod to. Because edge computing is where higher-level network functions are hosted, edge locations need to have a different kind of networking, one less focused on simple aggregation and more on meshing of service components.

While I believe that VMware’s announcement (referenced above) demonstrates that it’s in the 5G-and-virtual-function-hosting camp with regard to the edge, it also seems to work hard to cover the other models. VMware Telco Cloud Platform-Public Cloud provides integration with public cloud services, and Telco Cloud Platform-Edge accommodates the metro-fabric approach by being largely network-agnostic. That makes the third of our models appear to be VMware’s biggest risk, but the biggest risk to VMware and everyone else is that what emerges in the end is a combination of the three models.

Back when Juniper announced its Cloud Metro strategy earlier this year, I blogged about what the future edge/metro should look like (as a prelude to assessing Juniper's announcement). In effect, it's a virtual data center made up of metro host points connected in a low-latency mesh, combined with the architectural tools to manage the infrastructure, deploy edge applications based on some architectural framework, and operate the resulting conglomerate. Metro/edge rebuilds networks around the role that metro necessarily plays in enhanced service hosting and service features. It's this vision that VMware and others somehow have to support.

You can see the three models of edge emerging in my virtual-data-center definition, and you can also perhaps see why 5G is a complicated driver for edge computing. Vendors like Ericsson and Nokia can offer a complete 5G strategy, but they have to reference external specifications and vendor/product combinations to expand this to edge computing. There are no convincing external specifications for the edge, no edge architecture comparable to what O-RAN provides for 5G. Vendors like VMware can reference O-RAN and describe an edge strategy (as they do in the announcement), but they leave the network piece open, which means that network vendors could aim directly at the VMware position, and that any player could align with a network vendor or define a metro network model.

What would VMware have to do to defend its Telco Cloud strategy? The VMware blog post I reference above lays out the elements of the strategy fairly well, but it doesn't position it well, so that would be a good starting point. Most vendors who target the telecom space have a tendency to sell rather than market in their material. There's little about benefits or value propositions; it's about speeds and feeds, or at least about functional components. This reflects a view that the buyer community (the telcos) is actively seeking the solution the vendor is offering, and only needs to understand the pieces in the offering.

In the case of telco edge computing, I'd argue that events disprove this position. Telcos are actively seeking, at least for the moment, to avoid edge computing in the form of telco cloud deployments. They hope to realize 5G hosting through the public cloud providers, as I've already noted. My work with both telcos and enterprises over the years has shown me that successful selling is a matter of managing the trajectory that turns "suspects" into "prospects" and then into "customers". It's tough to do that if you don't address what those "suspects" are actually doing.

It wouldn’t be difficult for VMware to position their stuff right, to manage that critical trajectory, but while I can say that with a fair amount of confidence, I have to admit that the lack of difficulty hasn’t enabled other vendors to do much better. There seems to be a sell-side bias toward under-positioning and failing to consider the suspect-to-customer evolution.

What's going to be more difficult for VMware is that its credibility depends on platform dominance, and it's not really articulating an edge platform strategy. 5G hosting could pull through the edge, provided that the edge architecture supported both 5G and future applications like IoT. Right now, VMware is essentially saying that edge hosting is the same as cloud hosting, which, if true, takes a big step toward invalidating its own value proposition. If it's not true, then it's incumbent on VMware to explain what edge hosting really is.

VMware is, IMHO, the best edge-opportunity-positioned of the cloud-software players. They have great assets, and great insights. There’s a big opportunity in the metro/edge space and they seem determined to grab a major piece of it. We’ll watch to see how they do in 2023.

Comcast’s XClass TV Might Be a Game Changer

Streaming is the current revolution in video delivery. OK, it may get its biggest boost from the fact that it can deliver video to smartphones and other devices over any good Internet connection, but it also has competitive benefits…and risks. Comcast, the biggest cable operator in the US, may be taking the competitive gloves off with its XClass TV, and that may elevate the competitive importance of streaming in the market, perhaps even above the impact of mobile delivery.

Video is perhaps the most important consumer entertainment service, and its roots lie in “cable TV”, which is dominated by live programming with optional recording of shows for time-shifted viewing. Because the original model depended on “linear RF” delivery, video has tended to require specialized media—CATV or fiber cabling. That model doesn’t work for mobile service, and it obviously limits the scope of video service to places where an operator could expect enough customers to deploy wireline infrastructure.

The streaming video model, by delivering video over the Internet, opened up video access to anyone with high-quality broadband. Yes, that means mobile devices, but it also means "out-of-region" options for the previously regionally-constrained players, like Comcast. AT&T had previously taken a shot at streaming video, and has since revamped its approach, but that didn't seem to convince wireline video competitors that they also needed to think about streaming strategies of their own. Comcast may do that.

A shift to a streaming model would, of course, favor streaming providers, and that’s why competitive pressure created by moves like Comcast’s could kill off linear TV, at least in its CATV/fiber form. Over-the-air programming in major metro areas isn’t likely to be impacted in the near term, though there would be benefits in salvaging the spectrum used for TV broadcast. The problem is that many urban viewers rely on over-the-air, and it’s even popular in some suburbs. There’s a public policy question that would have to be addressed to eliminate linear TV, and I don’t think that will happen soon. However, we could expect other impacts.

One possible impact may sound surprising: a shift to streaming video would likely promote municipal fiber. Home broadband needs some strategy for live TV, and linear is not only too expensive an option for most muni projects to handle, but the licensing of the programming would be prohibitive. If we had a growing number of national live-stream-TV players, then governments that wanted to offer fiber could be sure that their citizens could have their TV. It's even possible that they could cut deals with providers for a lower price, perhaps to help offset deployment costs.

But the second impact could complicate things. We’re already seeing networks field their own streaming services, and that could ultimately lead to their reluctance to let their programming be incorporated in a multi-network streaming service of the type that we’re familiar with these days. This could end up either driving up the cost of those cable-like-streaming players’ services, or even driving some of them out of the market. The cost of signing up for every network’s streaming service would likely be higher than today’s multi-network service cost.

The biggest impact we could expect from moves like Comcast’s is that very per-network balkanization of streaming material I just mentioned. It’s more profitable for a strong network to deploy its own streaming services than to deal with a dozen or more aggregators. The streaming model liberates the content owners, and let’s not forget that Comcast owns NBC Universal. Recall that NBC recently had a tiff with YouTube TV over pricing, and over whether Peacock (NBC’s streaming service) should be incorporated en masse into YouTube TV. It’s hard not to see this as a bit of content chest-butting, and linked to the Comcast decision.

The big question is whether Comcast will take things beyond XClass TVs and into dongles or even just standalone streaming. Comcast said (in the referenced article) that XClass TV is their first initiative to sell their streaming service out of their own area, and without Comcast broadband underneath. Linking their out-of-region stuff to TV sales slows down the impact, which is surely what Comcast intends. If the whole notion lays an egg, it’s easy to pull out without compromising a later attempt to go national through some other mechanism. It also lets Comcast gauge the impact of their move on other networks, and competitors.

If we were to see a decisive shift to every-network-for-itself thinking, it would bode ill for the smaller networks, which would have a problem attracting enough subscribers to field an offering. Might big aggregators of network content then end up depending on those smaller and more specialized players?

Another question that the Comcast move raises is "What about Amazon?" Amazon's Prime Video service is widely used, but it doesn't include live content except through "Channels", which are relationships with live-TV network providers. Might Amazon step up? They already produce multiple TV series of their own, releasing all the episodes in a season at once. Might they adopt a more traditional "it's-on-at-this-time" model? Might they become an aggregator of smaller networks that can't stream on their own?

And if Comcast follows XClass TV with an XClass dongle, doesn't that make them a player like Google, which offers YouTube TV, Android TV, and Chromecast with Google TV? The breadth of Google's play could well be at the root of its dispute with Roku, a dispute that threatens to take YouTube TV off Roku devices (and presumably onto Google's own dongles).

XClass could be trouble for competitors, and a driver of change in the industry. It could also create a challenge for Comcast. The great majority of its customers are watching linear TV, and the cable industry overall is still largely focused on that delivery model for video. Changing things would mean retrofitting not only the cable plant but also the customer premises cable modem equipment. It may be that Comcast is signaling that this seemingly dire course is now definitely the path of the future. If that's the case, then it decides the DOCSIS evolution debates, and signals the death of linear TV except (perhaps) in the broadcast-and-antenna world. Streaming, in any event, is clearly winning.

What the Recognition of a New Cloud Hardware Model Means

Specialized microprocessors for smartphones aren't anything new, but we're starting to see (from Google, for example) custom processors for smartphones from the phone vendors themselves. Apple has introduced its own microprocessor chip for PCs, and Amazon has used its own customized chip for some cloud services since 2018. NVIDIA GPUs have been available in the cloud for some time, too. Now the story (from Wall Street) is that Alibaba is going to launch a line of chips for use in cloud computing servers. Are we seeing the end of x86 chip dominance, or maybe even the end of the age of merchant microprocessors?

Microprocessors are fundamental to the cloud, of course. If we look at the cloud as a simple utility computing model, with cloud IaaS competing with on-premises VMs, we might expect that having a specialized chip could offer a cloud provider an advantage in cost/performance that could translate into a competitive advantage. But how much of that “if-we-look-at” statement is really true? Could Alibaba have other reasons for wanting their own chip?

Both the AWS and Alibaba chips are based on an ARM CPU (Apple’s M1 chip is also based on the ARM architecture), which is a totally different processor architecture than the classic x86 chips of Intel and AMD. That means that binaries designed for x86 (and/or x64) won’t run on ARM-based chips. If we were to presume that cloud computing was all about “moving existing applications to the cloud”, this would be an almost-insurmountable problem, because most third-party software is delivered in x86 form only. But that’s not what the cloud is about, and we have to start with that truth.

Cloud computing today is dominated by GUI-centric development, either for social-network providers or for enterprises building front-ends to legacy applications that will themselves stay in the data center. This is new coding, and in almost all cases is coding done in a high-level language, rather than in “assembler” or “machine language” that is specific to the microprocessor architecture. You can get a compiler for most popular high-level languages for ARM chips, so new development fits the alternative chip architecture. In fact, GUI-specific apps seem to run much faster on ARM chips, largely because the x86 architecture was designed for general-purpose computing rather than real-time computing, which is what most cloud development really is.
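
To illustrate the portability point, here's a minimal sketch of the kind of check a deployment script might make; the package and file names are hypothetical. The high-level code itself is unchanged across architectures, but any prebuilt native component has to match the host's instruction set.

```python
# Minimal sketch: high-level code runs unchanged on ARM or x86, but any
# prebuilt native binary has to match the host's instruction set.
import platform

# Hypothetical mapping of architectures to prebuilt binary artifacts.
NATIVE_BUILDS = {
    "x86_64": "imagecodec-x86_64.so",
    "aarch64": "imagecodec-aarch64.so",  # 64-bit ARM, as on Graviton-class hosts
}

def select_native_build() -> str:
    arch = platform.machine()  # e.g. 'x86_64' or 'aarch64'
    try:
        return NATIVE_BUILDS[arch]
    except KeyError:
        raise RuntimeError(f"No prebuilt binary for architecture {arch!r}; "
                           "recompile from source for this target")

# The pure-Python logic above needs no changes per architecture; only the
# native artifact it loads does.
print(select_native_build())
```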

The reason for ARM's real-time benefit is that the term is an acronym for "Advanced RISC Machine", where RISC means "Reduced Instruction Set Computing". RISC processors are designed to use simple instructions that have a very low execution overhead. There are no complex instructions, which can mean that doing some complicated things will require a whole series of instructions that an x86 machine would do in one. Real-time processes usually don't have those "complicated things" to do, though, and so ARM is a great model for the front-end activities that really dominate the cloud today. It's also great for mobile phones, which is why RISC/ARM architectures dominate there.

None of this should be a surprise, but perhaps the “move-to-the-cloud” mythology got the best of everyone. NVIDIA is trying (with regulatory push-back) to buy Arm (the company), and I think the reason is that they figured out what was really going on in the market. The majority of new devices, and the majority of new cloud applications, will be great candidates for ARM (the processor). So does this mean that Alibaba is doing the right thing? Yes. Does it mean that it will gain share on Amazon and other cloud giants? No.

Obviously, Amazon is already offering ARM hosting, but so far the other major cloud providers aren't. That's very likely to change; some sources on Wall Street tell me that both Microsoft and Google will offer ARM/RISC processor instances within six months. Alibaba's own move would be likely to generate more interest from the second and third of our cloud giants, but I suspect that Amazon's recent successes with ARM would be enough. There are some extra issues that a cloud provider has to face if they offer ARM hosting, but they're clearly not deal-breakers.

The most significant issue with ARM hosting is the web services library that the provider offers. If the web services are designed to be run locally with the application, then they'd have to be duplicated in ARM form in order to be used. It's possible to run x86 code on ARM via an emulator, but performance is almost certain to be an issue.

A close second, issue-wise, is the possible confusion of cloud users. Some binaries will work on ARM and others on x86/x64, and you can't mix the two. In cloudbursting situations this could present issues, because data centers rarely have ARM servers, so the data center can't back up an ARM cloud. Similarly, an ARM cloud can't back up the data center, and you can't scale across the ARM/x86 boundary either. All this means taking great care in designing hybrid applications.
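
Here's a minimal sketch of the kind of guard a hybrid deployment might apply. It's my own illustration, not any vendor's API, and the workload and pool names are hypothetical.

```python
# Illustrative guard (my own sketch, not any vendor's API): before bursting or
# failing over a workload, confirm the target pool can run the workload's
# binary architecture. Mixed ARM/x86 estates can't back each other up unless
# the application ships images for both instruction sets.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    built_for: set  # architectures the application has images for, e.g. {"x86_64"}

@dataclass
class HostPool:
    name: str
    architecture: str  # "x86_64" or "aarch64"

def can_burst(workload: Workload, target: HostPool) -> bool:
    return target.architecture in workload.built_for

app = Workload("order-frontend", built_for={"x86_64"})
arm_cloud = HostPool("cloud-arm-pool", "aarch64")
dc_pool = HostPool("datacenter-x86", "x86_64")

for pool in (arm_cloud, dc_pool):
    ok = can_burst(app, pool)
    print(f"Burst {app.name} to {pool.name}: {'OK' if ok else 'blocked (rebuild needed)'}")
```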

Another issue is economy of scale, and this issue is hard to judge because of edge computing. A major cloud provider could almost certainly offer enough ARM hosts to achieve good economy of scale, within a couple percent of what they have overall. However, edge computing necessarily creates smaller resource pools and so further dividing an edge pool could threaten edge economics and ARM benefits. The question is whether edge applications, which are likely real-time in nature, could be so much better served with ARM hosts that the edge would go all-ARM.
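
To see why splitting a pool hurts, here's an illustrative calculation with made-up demand numbers. It assumes each pool is sized to its mean demand plus a safety margin, and that the two demand streams are independent; real edge economics would be messier, but the direction of the effect holds.

```python
# Illustrative only: why splitting a resource pool (e.g. carving ARM hosts out
# of an edge site) costs capacity headroom. Each pool is sized for its mean
# demand plus z standard deviations of margin; demand streams are independent.
import math

z = 2.33  # roughly a 99th-percentile margin for a normal-ish demand distribution

def capacity_needed(mean: float, stddev: float) -> float:
    return mean + z * stddev

# Hypothetical demand at one edge site, split into x86 and ARM workloads.
x86_mean, x86_sd = 60.0, 12.0
arm_mean, arm_sd = 40.0, 10.0

split = capacity_needed(x86_mean, x86_sd) + capacity_needed(arm_mean, arm_sd)
pooled = capacity_needed(x86_mean + arm_mean,
                         math.sqrt(x86_sd**2 + arm_sd**2))

print(f"Two separate pools need ~{split:.0f} units of capacity")
print(f"One combined pool needs ~{pooled:.0f} units")
```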

The ARM question is really an indicator of a major shift in cloud-think. We’re finally realizing that what runs in the cloud is not only something that was written for the cloud and not moved from the data center, but also something that may have little in common with traditional data center applications. That’s why ARM/RISC systems, GPUs, and perhaps other hardware innovations are coming to light; it’s a different world in the cloud.

The edge is even more so, and there’s still time to figure out what an optimum edge would look like. That’s a useful step in framing out an architecture model for edge applications, something I’ve been advocating for quite a while. The trick is going to be preventing a debate over processor architecture from distracting from the architectural issues. There’s more to software than just the compatibility of the binary image, and I think that the hardware insights we’re now seeing will be followed by software architecture insights, likely as soon as early next year.

Just How Real Could our Virtual Metaverse Be?

Facebook is said to be considering renaming itself to claim ownership of the "metaverse", which has led many (especially those who, like me, are hardly part of the youth culture) to wonder just what that means. The fact is that the metaverse is important, perhaps even pivotal, in our online evolution. It may also be pivotal in realizing things like the "contextual" applications I've blogged about.

At the high level, the term "metaverse" defines one or many sets of virtual/artificial/augmented (VR/AR) realities. Games where the player is represented by an avatar are an example, and so are social-network concepts like the venerable Second Life. Since we've had these things for decades (Dungeons and Dragons, or D&D, was a role-play metaverse and it's almost 50 years old), you might wonder whether new developments have changed the way we think about this high-level view, and you'd be right.

Facebook's fascination with the metaverse seems strongly linked with social media, despite the company's comments that it views the metaverse as a shift. Social media is an anemic version of a virtual reality, something like the D&D model, which relied on imagination to frame the virtual world. The metaverse presumes that the attraction of social media could be magnified by making that virtual world more realistic.

Many people today post profile pictures that don’t reflect their real/current appearance. In a metaverse, of course, you could be represented by an avatar that looked any way you like. Somebody would be selling these, of course, including one-off NFT avatars. There would also be a potential for creating (and selling) “worlds” that could be the environment in which users/members/players interacted. You can see why Facebook might be very interested in this sort of thing, but that doesn’t mean it would be an easy transformation.

One issue to be faced is simple: value. We've probably all seen avatars collaborating as proxies for real workers, and if we presumed a metaverse could be implemented properly, that could likely be done. The question is whether businesses would value the result. Sure, I could assume that a virtual-me wrote on a virtual-whiteboard and other virtual-you types read the result through AR/VR goggles, but would that actually increase our productivity? Right now, we're all talking as though the metaverse were an established technology, and positing benefits based on the most extensive implementation. Is that even possible?

The metaverse today demands a high degree of immersion in a virtual reality (as in a game), and a high degree of integration of the real world with augmentation elements in augmented-reality scenarios. Most aficionados believe that metaverses require AR/VR goggles, a game controller or even body sensors to mimic movements, and a highly realistic and customized avatar representing each person. As such, a metaverse demands a whole new approach to distributed and edge computing. In fact, you could argue that a specific set of principles would have to govern the whole process.

The first principle is that a metaverse has to conform to its own natural rules. The rules don’t have to match the real world (gravity might work differently or not at all, and people might be able to change shapes and properties, for example) but the rules have to be there, even a rule that says that there are no natural rules in this particular metaverse. The key thing is that conformance to the rules has to be built into the architecture that creates the metaverse, and no implementation issues can impact the way that the metaverse and its rules are navigated.

The second principle is that a metaverse must be a convincing experience. Those who accept the natural rules of the metaverse must see those rules in action when they’re in the metaverse. If you’re represented by an avatar, the avatar must represent you without visual/audible contradictions that would make the whole metaverse hard to believe.

Rule three is that the implementation of a metaverse must convey the relationships of its members and its environments equally well to all. This is the most difficult of the principles, the one that makes the implementation particularly challenging. We might expect, in the real world, to greet someone with a hug or a handshake, and we'd have to be able to do that in the metaverse even though the "someones" might be separated by a variable, and considerable, geographic distance.

Rule one would be fairly easy to follow; the only issues would emerge if the implementation of a metaverse interfered with consistent "natural-for-this-metaverse" behavior. It's rules two and three, and in particular how they interact in an implementation, that create the issue.

If you’ve ever been involved in an online meeting with a significant audio/video sync issue, or just watched a TV show that was out of sync, you know how annoying that sort of thing is, and in those cases it’s really a fairly minor dialog sync problem. Imagine trying to “live” in a metaverse with others, where their behavior wasn’t faithfully synchronized with each other, and with you. Issues in synchronization across avatars and the background would surely compromise realism (rule two) and if they resulted in a different view of the metaverse for its inhabitants, it would violate rule three.

Latency is obviously an issue with the metaverse concept, which is why metaverse evolution is increasingly seen as an edge computing application. It's not that simple, of course. Social media contacts are spread out globally, which means that there isn't any single point that would be "a close edge" to any given community. You could host an individual's view of the metaverse locally, but that would work only as long as all the other inhabitants were local to the same edge hosting point. If you tried to introduce a "standard delay" to synchronize the world view of the metaverse for everyone, you'd introduce a lag that would surely violate rule two.
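
A tiny worked example shows why; the latency figures are purely illustrative. A standard delay has to be at least the worst participant's latency, so everyone ends up living with the worst case.

```python
# Illustrative: a "standard delay" equalizes everyone's view of the metaverse
# by holding all events until the slowest participant could have received them,
# which means everyone experiences the worst-case latency.
one_way_latency_ms = {"local user": 5, "same-region user": 35, "cross-continent user": 150}

standard_delay = max(one_way_latency_ms.values())
print(f"Required standard delay: {standard_delay} ms")
for who, latency in one_way_latency_ms.items():
    print(f"{who}: events held an extra {standard_delay - latency} ms to stay in sync")
```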

An easy on-ramp to the metaverse, one that avoids the latency problem, would be to limit the kinds of interactions. Gaming where a player acts against generated characters is an example of this. Avoiding latency problems when players/inhabitants interact with each other would require limiting interactions to the kind that latency wouldn't impact severely. We may see this approach taken by Facebook and others initially, because social-media users wouldn't expect, at first, to perform real-world physical interactions like shaking hands. However, I think this eventually becomes a rule two problem. That would mean that controlling latency could end up as a metaverse implementation challenge.

One possible answer to this would be to create “local metaverses” that would represent real localities. People within those metaverses could interact naturally via common-edge technology. If someone wanted to interact from another locality, they might be constrained to use a “virtual communicator”, a metaverse facility to communicate with someone not local, just as they’d have to in the real world.

Another solution that might be more appealing to Facebook would be to provide rich metaverse connectivity by providing rich edge connectivity. If we supposed that we could create direct edge-to-edge links globally, each of which could constrain latency, then we could synchronize metaverse actions reasonably well, even if the inhabitants were distributed globally. How constrained latency would have to be is subjective; gaming pros tell me that 50 ms would be ideal, 100 ms would be acceptable, and 200 ms might be tolerable. The speed of light in fiber is roughly 128 thousand miles per second, so a hypothetical fiber mesh of edge facilities could deliver something to the farthest point on the globe in a bit under 100 ms, if there were no processing delays to consider.
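
Here's the arithmetic behind that estimate, using the same assumptions: propagation only, no routing detours, no processing or queuing.

```python
# Worked version of the propagation-only estimate above. Assumes light in
# fiber travels at roughly 128,000 miles per second and that the farthest
# point on the globe is half the Earth's circumference away along the surface.
FIBER_MILES_PER_SEC = 128_000
EARTH_CIRCUMFERENCE_MILES = 24_900

farthest_miles = EARTH_CIRCUMFERENCE_MILES / 2
one_way_ms = farthest_miles / FIBER_MILES_PER_SEC * 1000
print(f"One-way propagation to the antipode: about {one_way_ms:.0f} ms")
# About 97 ms, ignoring routing detours and all processing/queuing delays.
```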

The obvious problem with this is that a full mesh of edge sites would require an enormous investment. There are roughly 5,000 "metro areas" globally, so fully meshing them with simplex fiber paths would require about 25 million fiber runs (roughly 12.5 million full-duplex connections). If we were to create star topologies connecting smaller metro areas to larger ones, we could cut the number of meshed major metro areas down to about 1,000, but that only gets our simplex fiber count down to about a million paths. The more we work to reduce the direct paths, the more handling we introduce and the more handling latency is created.
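
The mesh arithmetic behind those figures is simple: a full mesh of n sites needs n(n-1)/2 bidirectional links, or twice that many one-way paths.

```python
# The mesh arithmetic behind those figures: a full mesh of n sites needs
# n*(n-1)/2 bidirectional links, or twice that many one-way (simplex) paths.
def full_mesh_links(n: int) -> tuple[int, int]:
    duplex = n * (n - 1) // 2
    simplex = 2 * duplex
    return duplex, simplex

for n in (5_000, 1_000):
    duplex, simplex = full_mesh_links(n)
    print(f"{n:,} metros: {duplex:,} full-duplex links, {simplex:,} simplex paths")
# 5,000 metros: 12,497,500 full-duplex links, 24,995,000 simplex paths
# 1,000 metros: 499,500 full-duplex links, 999,000 simplex paths
```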

Obviously some mixture of the two approaches is likely the only practical solution, and I think this is what Facebook has in mind in the longer term. They may start with local communities where latency can be controlled and allow rich interaction, then see where they could create enough edge connectivity to expand community size without compromising revenue.

Telcos and cloud providers, of course, could go further. Google and Amazon both have considerable video caching (CDN) technology in place, and they could expand the scope of that to include edge hosting. The same is true of CDN providers like Akamai. Social media providers like Facebook might hope that one of these outside players invests in heavily connected edge hosting, so they can take advantage of it.

Technology isn’t the problem here, it’s technology cost. We know how metaverse hosting would have to work to meet our three rules, but we don’t know whether it can earn enough to justify the cost. That means that the kind of rich metaverse everyone wants to talk and write about isn’t a sure thing yet, and it may even take years for it to come to pass. Meanwhile, we’ll have to make do with physical reality, I guess.

Fixing the Internet: Nibbles, Bites, Layers, and Parallels

The recent Facebook outage, which took down all the company’s services and much of its internal IT structure, certainly frustrated users, pressured the company’s network operations staff, and alarmed Internet-watchers. The details of the problem are still sketchy, but there’s a good account of how it evolved available from Cloudflare. Facebook said that human error was at the bottom of the outage, but the root cause may lie far deeper, in how we’ve built the Internet.

Most networking people understand that the Internet evolved from a government-research-and-university project. The core protocols, like TCP and IP, came from there, and if you know (or read) the history, you’ll find that many of the aspects of those early core protocols have proven useless or worse in today’s Internet. Some have been replaced, but others have evolved.

If something proves to be wrong, making it right is the obvious choice, but it's complicated when the "something" is already widely used. When the World Wide Web came along in the 1990s, it created the first credible consumer data service, and quickly built a presence in the lives of both ordinary people and the companies and resources they interact with. That success made it difficult to make massive changes to elements of those core protocols that were widely used. We face the consequences every day.

Most of the Internet experts I talk with would say that if we were developing protocols and technologies for the Internet today, from scratch, almost all of them would be radically changed. The inertia created by adoption makes this nearly impossible. Technology and Internet business relationships are interwoven with our dependence on the Internet, and to liken the Internet to a glacier understates the reality. It’s more like an ice age.

BGP appears to be at the root of the Facebook problem, and most Internet routing professionals know that BGP is complicated and (according to many) almost impossibly so. The Domain Name System (DNS), which translates the domain names in commonly used URLs into IP addresses, also played a part. BGP is the protocol that advertises routes between Internet Autonomous Systems (ASs), but it's taken on a lot of other roles (including roles in MPLS) over time. It's the protocol that many Internet experts say could benefit from a complete redesign, but they admit that it might be totally impossible to do something that radical.

It’s demonstrably not “totally impossible”, but it may be extraordinarily complicated. SDN, in its “true” ONF OpenFlow form, was designed to eliminate adaptive routing and centralize route control. Google has used this principle to create what appears to be an Autonomous System domain of routers but is actually an SDN network. The problem is that to get there, they had to surround SDN with a BGP proxy layer so that the Google stuff would work with the rest of the Internet. Could another layer of SDN replace that proxy, letting AS communications over some secure channel replace BGP? Maybe.
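
To make the centralized-control idea concrete, here's a toy sketch. It's the concept only, a controller with a global view computing paths and handing them to devices, not OpenFlow, not Google's implementation, and nowhere near production code; the topology and node names are hypothetical.

```python
# Toy sketch of centralized route control: a controller with a global view of
# link costs computes paths and pushes them to devices, instead of each router
# discovering routes adaptively. Concept only, not OpenFlow itself.
import heapq

def shortest_path(links, src, dst):
    """Dijkstra over {node: [(neighbor, cost), ...]}; returns (cost, node list)."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, link_cost in links.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return None

# Hypothetical topology the controller knows about.
topology = {
    "edge-a": [("core-1", 4), ("core-2", 7)],
    "core-1": [("core-2", 2), ("edge-b", 6)],
    "core-2": [("edge-b", 3)],
}
print(shortest_path(topology, "edge-a", "edge-b"))
# (9, ['edge-a', 'core-1', 'core-2', 'edge-b'])
```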

Then there's DARP, not the defense agency DARPA (or its earlier incarnation, ARPA), but the Distributed Autonomous Routing Protocol. DARP was created by the startup Syntropy, which has a whole suite of solutions to current Internet issues, including some relating to fundamental security, fostering what's called "Web3". DARP uses Syntropy technology to build a picture of Internet connectivity and performance. It's built as a kind of parallel Internet that looks down on the current Internet and provides insights into what's happening. However, it can also step in to route traffic if it has superior routes available. This means the current Internet could be made to evolve to a new state, or that it could use DARP/Syntropy information to drive legacy node decisions on routes.
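
The relay idea at the heart of this kind of overlay is easy to sketch. The latencies and city names below are hypothetical, and this shows the general concept rather than Syntropy's actual DARP logic.

```python
# Conceptual sketch of overlay relay routing: if measured latency through a
# relay node beats the direct Internet path, send traffic via the relay.
# General idea only, not Syntropy's DARP implementation.
# Hypothetical measured one-way latencies in milliseconds.
direct_ms = {("nyc", "singapore"): 240}
to_relay_ms = {("nyc", "frankfurt"): 45, ("frankfurt", "singapore"): 160}

def best_path(src, dst, relay):
    direct = direct_ms[(src, dst)]
    relayed = to_relay_ms[(src, relay)] + to_relay_ms[(relay, dst)]
    if relayed < direct:
        return f"relay via {relay}: {relayed} ms (direct is {direct} ms)"
    return f"direct: {direct} ms"

print(best_path("nyc", "singapore", "frankfurt"))
# relay via frankfurt: 205 ms (direct is 240 ms)
```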

The security issues of the Internet go beyond the potential issues like BGP, of course. Many feel that we need to rethink the Internet in light of current applications, and its broad role in our lives and businesses. The Web3 initiative is one result of that. It’s explained HERE and hosted HERE, and it has the potential for revolutionizing the Internet. Web3 has a lot of smarts going for it, but working against it is the almost-religious dedication many in the Internet community have to the protocol status quo. The media also tends to treat anything related to changing the Internet as reactionary at best and sinister on the average.

The scope of Web3 is considerable: "Verifiable, Trustless, Self-governing, Permissionless, Distributed and robust, Stateful, and Native built-in payments," to quote my first link's content. There's a strong and broad reliance on token exchanges, including blockchain and even a nod at cryptocurrency. The first of my two references above offers a good explanation, and I don't propose to do a tutorial on it here, just to comment on its mission and impact.

There is little question that Web3 would fix a lot of the issues the Internet has today. I think it's very likely that it would create some new issues too, simply because some players with a stake in something as enormous and important as the Internet are going to respond to change as a threat, and will try to game the new as much as they have the old. The fact that Web3 has virtually no visibility among Internet users, and only modest visibility within the more technical Internet/IP community, demonstrates that the concept's biggest risk is simply becoming irrelevant. People will try band-aids before they consider emergency care.

That’s particularly true when the band-aid producers are often driving the PR. Network security is now a major industry, and the Internet is creating a major part of the risk and contributing relatively little to eliminating it. We layer security on top of things, and that process creates an endless opportunity for new layers, new products, new revenues. This has worked to vendors’ benefit for a decade or more, and they’re in no hurry to push an alternative that might make things secure in a single stroke. In any event, any major change in security practices would tend to erode the value of being an incumbent in the space. Since most network vendors who’d have to build to Web3 are security product incumbents, you can guess their level of enthusiasm.

They have reason to be cautious. Web3 is so transformative it’s almost guaranteed to offend every Internet constituency in some way. The benefits of something radically new are often unclear, but the risks are usually very evident. That’s the case with Web3. I’ve been involved in initiatives to transform some pieces of the Internet and its business model for decades, and even little pieces of change have usually failed. A big one either sells based on its sheer power, or makes so many enemies that friends won’t matter.

Doing nothing hasn’t helped Internet security, stability, or availability, but doing something at random won’t necessarily help either, and in fact could make things worse. I see two problems with Web3 that the group will have to navigate.

The first problem is whether parallel evolution of Internet fundamentals can deliver more than layered evolution. When does the Internet have too many layers for users/buyers to tolerate? When does a cake have too many layers? When you can’t get your mouth open wide enough to eat it. The obvious problem is that a single big layer is as hard to bite into as a bunch of smaller ones. Things like the Facebook problem should be convincing us that our jaws are in increased layer jeopardy, and it may be time to rethink things. The trick may be to make sure the parallel-Internet concepts of Web3 actually pay off for enough stakeholders, quickly enough, to catch on, rather than die on the vine.

The second problem is the classic problem with parallelism, which is how much it can deliver early on, particularly to users who are still dependent on traditional Internet. It seems to me that Web3 could deliver value without converting a global market’s worth of user systems, but more value when most browsers embraced it. Is the limited value enough to sustain momentum, to advance Web3 to the point where popular tools would support it? I don’t know, and I wonder if that point has been fully addressed.

My view here is that Web3 is a good model, but the thing that keeps it from being a great model is that it bites off so much that chewing isn’t just problematic, it’s likely impossible. What I’d like to see is something that’s designed to add value to security and availability, rather than something that tries to solve every possible Internet problem. The idea here is good, but the execution to me seems just too extreme.

VMware Prepares for Life on its Own

VMware, like most vendors, has regular events designed to showcase new products and services, and VMWorld 2021 is such an event. The stories the company told this year are particularly important given that the separation of VMware and Dell is onrushing, and everyone (including Wall Street and VMware's customers) is wondering how the firm will navigate the change. It's always difficult to theme out an event like this, because there are usually announcements touching many different technologies, but trying to lay out the main themes is important.

The first such "main theme" is the cloud, meaning specifically hybrid and multi-cloud. VMware has a very strong position in the data center, earned largely by its early leadership in virtual machines. The cloud has pretty clearly moved on to containers, and in any event, containers are much easier to deploy because they carry their application configuration information with them. VMware has an excellent container framework in Tanzu, one of the best in the industry if not the best, but it's been dancing a bit with harmonizing container futures with VM antecedents.

What seems to be emerging now is fairly straightforward. If we assume that applications have to deploy in the cloud and the data center, and that "the cloud" means two or more clouds (multi-cloud), then there is a strong argument that an enterprise with this sort of deployment model could well want to use VMs in both the data center and in all their public clouds, and then use Tanzu to host containerized applications and tools in those VMs. This is the Tanzu-on-vSphere model. It would create a unified container deployment environment across everything, based on Tanzu, and support vSphere VM applications as usual.

The positioning anchor for all of this seems to be application modernization (AppMod), which is smart because the concept includes the creation of cloud front-ends for legacy applications, not just the “moving to the cloud” model that I think is out of touch with current software evolution reality. Tanzu, as the company indicated in its conference, is the future of VMware, and that realization is perhaps the most critical point in their overall positioning. The company seems committed to shedding its relentless VM focus, and that’s essential if they’re to remain relevant.

My only concern with the Tanzu positioning is that a bewildering set of products is cited, and it's not easy to establish the role of each and its relationship to any specific mission. That's particularly noticeable in networking, which VMware spreads across at least three of its major announcement topic areas. VMware's NSX virtual networking strategy is perhaps the original offering in the space (they got the first version when they acquired Nicira, which was the first big player in the space), and I think it would have been smart for VMware to try to focus networking on NSX just as they're trying to focus hosting on Tanzu.

VMware has been active in the telco vertical for years, but their presentation of their telco products lumps them with SD-WAN in the "Edge" category. If you were to presume that "edge" means "edge hosting", then it would be logical to say that Tanzu belongs there, and in fact the strongest possible positioning for VMware in the telco space would be based on Tanzu, with support from specialized telco offerings (VMware Telco Cloud Infrastructure and Telco Cloud Platform) and virtual networking (NSX and SD-WAN). They do claim to support the evolution from VNFs to CNFs, but their solution brief for telcos doesn't mention Tanzu, and thus doesn't link in their main enterprise strategy or their edge computing.

At VMWorld, they announced the “Cloud Native Box”, a joint activity with partner Accenture. This is what the VMware blog said: “The Cloud Native Box is a market-ready solution that supports multiple use cases with unlimited potential for specific deployment models, from core to edge and private networks solutions, depending on the business demands of each company. As a pre-engineered solution it offers proven interoperability among radio components, open computing, leading edge VMware Telco Cloud Platform (TCP) and a plethora of multivendor network workloads, with unparalleled lifecycle management automation capabilities.” It seems pretty clear that Cloud Native Box is aimed at resolving the telco issues I just cited, but how it does that can’t be extracted from the VMware blog, so we’ll have to wait to assess what the announcement will mean.

Ever since VMware acquired Pivotal and its Cloud Foundry cloud-native development and deployment tools, there's been a bit of a struggle to avoid having two persistent container/cloud strategies running in parallel. The show indicates that VMware is making progress integrating Pivotal Cloud Foundry with Tanzu, and the challenge hasn't been eased by the loosey-goosey definition of "cloud native" that's emerged. Tanzu Application Services is how the Cloud Foundry stuff is currently packaged, but many see the "Tanzu" name more as an implied direction than as an appropriate label for current progress in consolidating the two container frameworks.

The difficulty here is that Tanzu Application Services is really a cloud-native development and deployment environment, and Kubernetes is really container orchestration. There’s real value in the old Cloud Foundry stuff, beyond the installed base, and of course VMware wants to increase the customer/prospect value in the integration and not toss everything to the wolves. They’re not there yet, but they’re making progress.

I think that there are two factors that have limited VMware's ability to promote its stuff effectively. One is the classic challenge of base-versus-new. VMware has a large enterprise installed base who obviously know VMware-speak, and retaining that base is very important, particularly to the sales force. Not only does that guide positioning decisions, it also influences product development, aiming it more at evolution (of the base, obviously) than at new adoption. That's reasonable in the enterprise space because the base is large, but it doesn't serve well in the telco world.

The second factor is a positioning conservatism that I think developed out of the Dell/VMware relationship. Obviously, the two companies need to be somewhat symbiotic, which means that they can’t position in a way that essentially makes them concept competitors. Now that there’s going to be a spin-out-or-off-or-whatever, VMware will have to stand on its own, but until then it’s important that neither company rocks the collective boat too much.

Any major business change creates challenges, and either major M&A or spin-outs are surely major business changes. Executives are always preoccupied with these shifts, and in the case of VMware, so are many critical employees. Some look forward to being independent, thinking they were constrained by Dell, which they were. Some fear it, concerned that Dell might shift its focus to a broader mix of platform software, at VMware’s expense, which they likely will. In short, nobody really knows how this is going to turn out, and what the best strategy for VMware will be for the future.

Whatever it is, it needs to address the issues I’ve cited above. In fact, it needs to address them more than ever, because optimizing the favorable implications of the spin-out and minimizing the risk of the negative starts with having a story that’s not just coherent and cohesive, but also exciting beyond the VMware base. Up to now, getting that story has proved problematic for VMware, and they can’t afford to let those past positioning difficulties contaminate their future.

Google’s Distributed Cloud Could Define the Edge and Redefine the Cloud

Everyone in telecom is surely aware of the push of public cloud providers into the telco world. Amazon and Microsoft have long offered telcos hosting of elements of 5G and other "telco cloud" applications. Google now wants to get into the game, or rather to get into it on a more serious basis. There are a lot of good reasons why that might work, and some reasons why it could be more difficult than Google expects.

I blogged about Google’s position in the telco space before, noting that the telcos have long been a bit enamored of Google as a partner. Google already runs what’s arguably the largest network built from hosted components, and while they’ve been a distant third in the cloud overall, and even with telcos, they are still recognized as a major innovator. What they announced this week was Google Distributed Cloud, a strategy that not only addresses the telco 5G hosting that Google’s competitors want a piece of, but the broader area of distributed computing, even for enterprises.

What GDC (I hate typing long names, so forgive my use of the acronym!) does is abstract the platform-as-a-service feature set of Google Cloud from the hosting. It’s not unlike early initiatives from Amazon and Microsoft to host cloud elements on the customer premises, but it has a broader intended application. You can run GDC on anything, including Google’s cloud hosts, edge computing, other clouds, premises or partner data centers, you name it. This is a really important initiative, particularly for the telcos.

The biggest problem that telcos face in dipping their toes into hosted service functions via the public cloud is avoiding getting locked in. Most of the public cloud offerings for telecom are specialized to the provider, to the point where it would be complicated for a telco to move to another cloud, or to decide to pull cloud-hosted stuff back inside to their own telco cloud infrastructure. GDC, by creating a portable software platform that can be hosted on nearly anything, eliminates that risk.

Google says that its Anthos multi-Kubernetes-cluster strategy is at the core of GDC. Operationally, it unifies all the hosting environments, and it appears for the moment to be the way that GDC can support multi-cloud. While the statement of multi-cloud support is explicit in GDC's diagrams, the details of how that would work, given that "multi-cloud" means multiple cloud providers, are sparse. Thus, I can't say for sure that the GDC platform itself could be hosted on another provider's IaaS, though it seems possible on the surface. Operationally, Anthos would unify whatever you could run in other clouds with the rest of GDC.

What makes GDC clever is that it’s the first true functionally layered public cloud model; the application’s view of the cloud is created by the platform, which can then be hosted in multiple ways. That means the applications run wherever GDC can be hosted, and that means that telecom applications like 5G RAN and Core can be hosted on GDC and tightly integrated with edge computing applications that would be coupled (explicitly or just to control latency) with those 5G elements.
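
Here's a minimal sketch of that layering concept. It's not Google's API, just an illustration of an application coding to an abstract platform that can be bound to different hosting back-ends; the class names are my own.

```python
# Sketch of the layering concept only (not Google's actual GDC API): the
# application codes to an abstract platform interface, and the same platform
# can be bound to different hosting back-ends.
from abc import ABC, abstractmethod

class HostingTarget(ABC):
    @abstractmethod
    def run(self, container_image: str) -> str: ...

class GoogleCloudRegion(HostingTarget):
    def run(self, container_image: str) -> str:
        return f"{container_image} scheduled in a Google Cloud region"

class OperatorEdgeSite(HostingTarget):
    def run(self, container_image: str) -> str:
        return f"{container_image} scheduled on an operator edge site"

class OnPremCluster(HostingTarget):
    def run(self, container_image: str) -> str:
        return f"{container_image} scheduled in the enterprise data center"

class DistributedPlatform:
    """The application sees only this layer; hosting is a deployment choice."""
    def __init__(self, target: HostingTarget):
        self.target = target
    def deploy(self, container_image: str) -> str:
        return self.target.run(container_image)

for target in (GoogleCloudRegion(), OperatorEdgeSite(), OnPremCluster()):
    print(DistributedPlatform(target).deploy("ran-cu-up:1.2"))
```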

The Google Distributed Cloud Edge offering, the one that seems directly aimed at telcos, is a managed service that presumably could be hosted on Google’s own cloud or edge elements, or on the operators’ premises. So far, at least, Google doesn’t appear to be bundling 5G functions with its service, so operators could host any compatible 5G functionality. Google has been working closely with vendors like Ericsson and Nokia on this, though.

For enterprises, the primary offering is Google Distributed Cloud Hosted, which supports both enterprise edge and the data center. This model of GDC means that you could build applications to a single model and run them anywhere. The run-anywhere capabilities of GDC would likely be of great value in enterprise IoT applications, where an event flow might well extend from a device all the way through an enterprise edge device, through the public cloud, and into the data center. Of course, other latency-sensitive applications would also benefit, and so would any cloud-centric application that relied on component hosting shifting between data center and cloud.

There are obviously a lot of positives to GDC, but what about those "difficulties" that I cited at the opening of this blog? I see two specific ones, and while neither is insurmountable, both could limit Google's ability to fully realize the potential of GDC.

The first issue is (sadly) hardly uncommon in our market; it’s hazy positioning. The first most people will hear of a new offering like this is a set of stories that come out of the initial material the supplier provides, and that material is, in the case of GDC, a bit muddy. They show a multi-cloud example in their diagram but never explain how that’s delivered. They talk about GDC in some places in a very enterprise-centric way, and then talk about the edge component in terms of telcos and their edge plans, even though they also talk about GDC as an enterprise edge model.

They never make what I think is the key point, except perhaps in a diagram where it’s implied. GDC is an abstraction of the cloud as a PaaS, a platform that isolates users and applications from hosting. Will that point be captured by stories? None of the ones I’ve seen picked up on the abstraction or layered-platform angles, which to me means that Google didn’t nail down the key features that would differentiate it from competitive offerings.

Which brings us to the second issue. The Google Distributed Cloud Edge and GDC Hosted elements are previews (the latter is even pre-preview at this point), so Google has pushed out the GDC story well in advance of anyone's being able to fully execute on it. That's not necessarily bad, since exploiting GDC will demand some rethinking of application designs, but it gives competitors a lot of time to take their own shot.

The biggest pieces of a distributed-cloud abstract platform layer are the cloud web services, scope of orchestration and management, and the creation of the connection to the hosting layer. Obviously, both Google’s major cloud competitors have the first of these, and Microsoft has positioned Azure as being a platform-as-a-service framework from the first. Beyond that, Microsoft has Azure Arc, which has some similarities to Anthos for management and orchestration; Amazon doesn’t really have a comparable offering yet. The mapping between the distributed cloud layer and hardware might take some effort, but given the state of cloud competition, it’s hard to believe both Amazon and Microsoft haven’t been looking at this all along.

Google is obviously intent on redefining the cloud, not just competing in a marketing lake that others have already defined and seized. That’s a worthy ambition, and if you believe in edge computing it’s likely an essential one. It’s also a major task, but whether Google succeeds and gains a benefit in the space, or fails and remains a third-place player in the cloud market, I think the notion of an “abstract cloud” is here to stay.

What’s Holding Back the Adoption of New Technologies?

What factors limit the adoption of new technologies? This question is critical for things like IoT, 5G, and even AI, but we don’t seem to have much of a track record in answering it. I used to jokingly say that our view was “technology sucks”, meaning that simply making a new technology available sucked money from buyer pockets involuntarily. That’s not my actual view, of course, but the fact is that a lot of the ways we try to predict technology shifts aren’t much more logical.

From time to time, I’ve mentioned in my blog that “my model says” something, and it always surprised me that few ever asked what the model was. For those who don’t know, it’s based on predicting buyer decisions based on the factors that influence them. Simple surveys started to demonstrate minimal predictive value within five years of when I started doing them, so I thought modeling how decisions were made might help. I gathered information from 277 companies who I grouped into 32 behavior clusters, surveyed to find out how they made decisions, and then took the information from each cluster and scaled it based on the extent to which members of that cluster represented the market. I’ve kept the model updated over the years, and I still use it.
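
A minimal sketch of that cluster-and-weight approach, with purely illustrative numbers rather than my actual model data, looks like this: each behavior cluster's survey result is scaled by the share of the market that behaves like that cluster.

```python
# Minimal sketch of the cluster-and-weight idea described above (illustrative
# numbers, not actual model data): each behavior cluster's survey result is
# scaled by the share of the market that behaves like that cluster.
clusters = [
    # (cluster name, share of market it represents, fraction likely to adopt)
    ("aggressive early adopters", 0.10, 0.70),
    ("mainstream pragmatists",    0.55, 0.30),
    ("cost-driven laggards",      0.35, 0.05),
]

market_adoption = sum(share * adopt for _, share, adopt in clusters)
print(f"Modeled market adoption likelihood: {market_adoption:.1%}")
# 0.10*0.70 + 0.55*0.30 + 0.35*0.05 = 0.2525, or about 25%
```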

CFOs were obviously a big part of this, because while they aren’t the only factor in technology decisions, they are the executives who set the parameters for how spending is approved. They also see the project plans, and in some cases they even participate in technology reviews. To answer the question I opened with, I want to use my model, but also reference remarks CFOs have made to me about new technologies.

The point I think I should open with is simple; CFOs overwhelmingly support the statement that “a new technology should, and will, receive a more intensive pre-adoption review than one that’s been around, particularly one that’s already been used by the company.” This shouldn’t be surprising; CFOs are rarely technology evangelists. It also neatly raises my next point, which is what makes up such a review.

What influences a buyer in the early consideration of a new technology? The answer has changed subtly over the years. In 1989 when I first asked, the top four influences were our own experience, the experience of a trusted peer in the same vertical, specific influential publications, and the vendor we trust the most with our data center technology. In early 2021, the top four were our own experience, the experience of another major company in our vertical, the vendor we trust the most in the data center, and our most trusted network vendor. Let’s decode the shift.

It's not a surprise that buyers trust their own experiences first, nor that they trust the experiences of other companies in the same field. We have seen an erosion in "trusted peer" experience as an influence; most buyers no longer indicate that they'd have to personally know a technology reference. It shouldn't surprise us to find that the data center vendor with account control has significant influence, even if the new technology isn't explicitly part of the data center. The major data center vendor is very likely to have a dedicated salesperson or team, and will almost certainly have a vertical-specialized sales contact if they don't. So far, so good.

One interesting change is that today, the most trusted network vendor has influence they didn’t have in the early years. This reflects two things. First, the data center now has its own network, and that network is seen as a strategic asset almost to the same extent as the computing resources. Second, the network is what projects the value of IT to the people who use it; since it’s delivered information that counts, not stored information, the network becomes key.

The last point may be critical in our understanding of the barriers to new technology: there are no specific influential publications any longer. In fact, media influence on buyers has declined from third place in 1989 to 10th place in 2021. Analyst firms had no specific placement in 1989, but in 2021 they ranked 7th.

CFOs had some insight into why neither media nor analysts got big influence scores. In the case of the media, the relevant comment was “Do you think anyone would be more likely to approve a technology product because it got written up in the media?” As far as analysts are concerned, they hit their high point in influence about five years ago, with a placement of fifth, just out of my top four. They’ve declined because of a growing feeling among the buyers that both analysts and the media are influenced too much by vendors. The “approval” comment is a reflection that neither media nor analysts are seen as a useful source of product insight, but that analysts still have some value as a “validator” when a particular technology is reviewed, or more likely when a specific vendor is selected.

The validation angle is important in understanding technology shifts, meaning the introduction of something new. My buyer research has consistently shown that new technologies tend to come to the attention of buyers because of media buzz. The buyer will, at that point, attempt to gain an understanding of the technology in general, and in particular the specific ways that the technology could help their business. Usually they’ll try to do some research before contacting vendors, but buyers tell me that it’s become more difficult to do that. Websites today rarely provide what they need, the media doesn’t really run educational stories, and so there’s little quality educational material out there.

Vendor data is, of course, manipulative. Educational selling is inefficient, and sales organizations will push back against it unless there’s some compensation for their time. Smaller vendors are at a particular disadvantage when new technologies are introduced, because they usually can’t afford to spend time educating the market, especially when educated buyers typically put the project out for bids and the vendor that did the educating may never see the benefit of its efforts.

The final issue that limits new technology success is fragmentation. For over a decade, we’ve heard phrases like “god-box” and “boil the ocean”, which suggest that taking too big a bite of an opportunity can lengthen the sales cycle, raise more objections, introduce more dissenting views, and otherwise do bad things to the bottom line. There’s an element of reality behind this, and CFOs admit that if you make a “revolutionary” change, you’re likely to displace something that’s already installed and not yet fully depreciated. That raises the cost of the project without raising the benefit, and that makes approval less likely. But often revolutionary technologies only deliver their benefits if they’re adopted in a revolutionary way.
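A quick, purely hypothetical calculation shows why the depreciation point matters to a CFO. Assuming straight-line depreciation and invented numbers:

# Hypothetical write-off from displacing gear that isn't fully depreciated.
# Straight-line depreciation assumed; every number here is invented.
installed_cost = 1_000_000      # original cost of the gear being displaced
useful_life_years = 5           # its depreciation schedule
years_in_service = 3            # how long it's actually been running
undepreciated = installed_cost * (useful_life_years - years_in_service) / useful_life_years
print(undepreciated)            # 400000.0 -- booked against the new project

That $400,000 lands on the cost side of the business case without adding a dime to the benefit side, which is exactly the objection CFOs raise against revolutionary displacement.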

Some of the issues that hold back new technologies can be addressed, but we’re addressing them the wrong way. The overwhelming response to these problems is hype, meaning aggressively positioning something without regard for any provable set of facts. We see that in nearly everything today. It rarely helps, and it often hurts by focusing vendors on things that, in the end, aren’t going to change anything. Sometimes all hype does is delay things, but I believe we’ve seen technologies rendered obsolete before they ever got a shot. Hype may make good copy, but it doesn’t help us advance our computing and networking position.

Tracking Cloud and Data Center Spending Realistically

Numbers are always interesting, and sometimes they’re even more interesting if you look at them in a different context. There’s a really interesting piece in The Next Platform on data center shifts, and I’d like to take that different look at it to raise some points I think are important.

The article’s thrust is that the public cloud is the leading factor in data center deployment, despite the fact that the IDC data it cites seems to show that investment in the public cloud declined over the last year while spending on traditional data centers grew. My own contacts with enterprises also suggest that their incremental spending on the public cloud (which of course would likely drive cloud infrastructure spending) grew last year and is growing more slowly this year. So far, we’re consistent.

Where I have a different perspective is in the interpretation of the numbers. Let me summarize what I’ve heard from enterprises over the last couple years to illustrate.

First, enterprises almost universally accept that the cloud is a better platform on which to deploy “front-end” technology used to present information and support customer, partner, and even worker application access. This has always been the dominant enterprise mission for the cloud. During the pandemic and lockdowns, changing consumer behavior (toward online shopping) and work-from-home combined to accelerate changes in application front-ends, and since those were dominantly cloud-hosted, that drove increased cloud spending. The cloud providers responded by building out capacity.

Second, what we’ve traditionally called “core business applications” have always been divided (like all business applications) into “transaction processing” and “user interface”. The latter tend to change faster and more than the former, and you can see that by reflecting on a widely used application like check processing (demand deposit accounting, or DDA, in banking terms). What’s involved in recording money movement doesn’t change much, but how you interact with the process can change quickly. The point is that during the same period, the transaction processing piece of applications, almost always hosted in the data center, didn’t require more resources. Keeping things up to date, meaning the replacement of obsolete gear, was the only driver.

Third, the pandemic put a lot of financial pressure on companies, and that pressure encouraged them to control their spending. If sustaining the top-line revenues in the face of change demanded more be spent on the cloud, the changes to the data center would logically be deferred as much as possible. They were, resulting in data center spending dips last year. However, what’s deferred isn’t denied, and so this year the data center got its turn at the well.

The article, like much that’s written on this topic today, seems to take the position that we’re on the road to moving everything to the cloud, that company data centers will pass away. The current shift is seen more as a bump in the road than an indication that the total-replacement theory isn’t correct. Recall that recently, the CEO of Amazon’s cloud business admitted that some workloads would never move. The article says “At some point in the future, maybe in 2030 or 2035, maybe this way [the data center way] of computing will go away. Or, maybe it will just live on forever as a niche for doing back office stuff.” I think the total-replacement theory is wrong, but that doesn’t mean that over time we don’t see more cloud in our infrastructure. It means we have to think of applications, clouds, and data centers differently.

Let me quote another line from the article: “We were born in the mainframe era, and there is a good chance that when we colonize Mars, the payroll will be done by some punch card walloping mainframe that is gussied up to look like an iPhone XXVI with a pretty neural interface or some such nonsense.” This view, which you have to admit sounds a bit condescending, reflects another misconception that relates to how we think about applications and how they’re hosted.

Let’s go back to the bank example. If you look at our DDA application, you realize that while there are a lot of transactions associated with check-cashing, a lot of doing business is tied not to the transactions but to their results. There are statements, there are regulatory reports, there’s that payroll and personnel stuff, and there are the myriad analytical reports run to manage the business, optimize profits, and plan for the future. One bank told me that the result of a check being cashed was referenced, on average, almost a dozen times by analytics and reporting applications. All those references, in a cloud world that charges for data movement, would raise costs considerably if the data were moved to the cloud.
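To see how those references turn into cloud cost, here’s a back-of-the-envelope sketch. Every figure in it is an assumption I’ve invented for illustration (transaction volume, bytes per reference, per-GB rate), not anything the bank reported:

# Hypothetical data-movement cost for the reference pattern described above.
# Every figure is an invented assumption, not anything the bank reported.
transactions_per_month = 1_000_000_000   # a large bank's monthly transaction volume (assumed)
bytes_per_reference = 50_000             # data pulled each time a result is referenced (assumed)
references_per_result = 12               # per the "almost a dozen" figure above
rate_per_gb = 0.09                       # illustrative per-GB data-movement charge (assumed)

gb_moved = transactions_per_month * bytes_per_reference * references_per_result / 1e9
print(f"{gb_moved:,.0f} GB moved, about ${gb_moved * rate_per_gb:,.0f} per month")
# 600,000 GB moved, about $54,000 per month under these made-up numbers

Change the assumptions and the number moves a lot, but the structure of the charge doesn’t: every added reference multiplies the data-movement bill.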

We just had a couple of major problems with online services, most recently the big Facebook outage, caused by Internet-related configuration errors. If you lose your network access to the cloud, you lose your applications. Stuff in the data center could be impacted by network problems, of course, but the data center network is more contained, has fewer components, is simpler, and is under a company’s direct control. Losing the cloud part of an application is bad enough, but to lose all data access? Company management tends to worry about that sort of thing.

Then there’s the security issue. Just last week, Amazon’s streaming game platform, Twitch, was hacked, creating what was described as the largest data compromise to date. This is Amazon, a cloud provider. Sure, enterprises have also suffered data losses, ransomware, and so forth, but most businesses are more comfortable with risks that are under their control than risks that they can’t control.

All these points, in my view, illustrate the most important conclusion we can draw from the data in the article. The division of application functionality between cloud and data center has been responsible for cloud growth; the emphasis on the front end of applications, and on the user experience, focuses investment in the cloud because the cloud makes a great front-end. To change that division, and the spending pattern it creates, there would have to be a fundamental shift in cloud pricing and perception, a shift that I think would hit the cloud providers’ bottom lines immediately and might well not generate enough change to offset the loss.

Technology change isn’t mandatory for businesses. We adopt new things because they serve us better, and when we understand just why that is, we can optimize those new things so that we can justifiably adopt them faster and more broadly. The cloud is such a thing. There’s no reason to assign it a mission it’s not going to fulfill, a mission that hides its real value and potentially reduces development of the tools that would magnify that value. Reality always wins, and facing it is always the best idea.

Geography, Demography, and Broadband Reality

One size, we know both from lore and from experience, doesn’t fit all. The same is true for access technologies. We’re reading stories about the rise of fiber, like this one that quotes Consolidated Communications saying “There are some mobile or temporary use cases where FWA is best, he says, but for the majority of customers, fiber is more cost-effective for Consolidated to deploy.” Just how true is that across an entire national service geography?

There are also stories like this one, saying that broadband in the US is worse, and more expensive, than it is in other countries. Many have a problem understanding why that would be the case, given the level of tech commitment in the US. Many wonder why we have, as the story suggests, 7% of the population who don’t have access to reliable broadband. I think the two questions I’ve asked here have a related answer.

Suppose we take a mile of fiber and air-drop it arbitrarily on some occupied landscape. In an urban area, that fiber could well pass hundreds of thousands of potential customers, and in a rural area it might miss everyone. The return on infrastructure investment would be high in the first case, and zero in the second. That alone says that there is no single answer to the question “What’s the best broadband technology to empower a given country?”

I’ve used “demand density” for decades to measure just how profitable broadband access would be, overall, for a country. Demand density explains why Singapore or Korea have such great, and inexpensive, broadband compared to countries like the US, Canada, and Australia. Among a dozen sample countries, demand densities vary by a factor of 35. That mile of access fiber passes a lot more people in some countries than in others! But most countries have multiple access providers, and many of those serve limited geographies rather than the country overall. What does breaking down a country do to our calculations?

Obviously, you could calculate demand density for any geography where the underlying data is available, which includes things like GDP, occupied area, and road miles. I’ve done that for the US market, for each state and for AT&T’s and Verizon’s territories. AT&T serves a more rural territory, and that shows in its demand density, which is about a seventh of Verizon’s. That explains why Verizon has been pretty aggressive in Fios deployment relative to its main competitor. On a state basis, things are even more dramatic; the highest state has almost 250 times the demand density of the lowest.
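For readers who want the flavor of the calculation, here’s a hedged sketch of how the inputs I named (GDP, occupied area, road miles) could be combined and normalized. This is not my actual formula, just an illustration of the approach:

# A hedged, illustrative demand-density-style metric; this is NOT the actual model.
def demand_density_proxy(gdp_usd, occupied_sq_miles, road_miles):
    """Economic opportunity per unit of access plant needed to reach it (illustrative)."""
    per_road_mile = gdp_usd / road_miles          # how much opportunity each route mile passes
    per_sq_mile = gdp_usd / occupied_sq_miles     # how concentrated that opportunity is
    return (per_road_mile * per_sq_mile) ** 0.5   # geometric mean; the blend is an arbitrary choice

def normalize(value, us_overall_value):
    """Express a geography's density relative to the US overall, so US = 1.0."""
    return value / us_overall_value

Plug in any state, operator territory, or country for which those inputs exist and you can make the same kind of comparison.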

Returning to our fiber air-drop, we can see that for any given country, and any given operator territory within it, there would be a huge variation in demand density depending on where our fiber landed. That variation would be reflected in the business case for fiber access, or any other access technology. The more the variation, the less likely that something like universal fiber would be the best choice for the operator.

Another interesting point is that if you dig down even deeper, you find that almost every country has small areas, postal-zone-sized, with very high demand densities. Among industrial economies, the demand densities of these high-density areas are fairly consistent, and all are sufficient to justify fiber access. Similarly, almost every country has low-density areas where anything other than an RF technology is unlikely to be a practical broadband option.

This is important because how customers are distributed across the spectrum of demand density within an operator’s geography sets some policy constraints on that operator’s broadband planning. Sometimes regulators impose a mandate for consistent broadband, and sometimes it’s a matter of customer response. Would you want to offer super-fast, inexpensive broadband to a fifth of your market, with the rest having substandard service that might actually cost more? Digital divides are more than abstract when your own deployment plans create and sustain them.

When we hear stories like the one I cited, it’s easy to extrapolate the fiber success to an operator’s entire geography, or even to a whole country (like the US, in this case). That’s a big mistake. When we consider issues like public policy relating to universal broadband at a given minimum speed or with a specific technology, that same kind of extension is another big mistake. One operator told me that they have customers whose 1 Gbps Internet connections would each require running five miles or more of fiber to serve that customer alone. Will taxpayers consent to subsidize that kind of cost?

If we set the overall US demand density as 1.0, then my modeling suggests that where demand densities are roughly 5 or 6, you can make fiber/PON work on a decent scale. There are 12 states where that’s true. If we’re talking about 5G/FTTN hybrid broadband, a demand density greater than 2 would work, and roughly half the states could make that work on a large scale. With cellular broadband using mobile 5G technology, 47 states could provide decent service to a large percentage of the population.
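Expressed as a simple lookup, with the thresholds rounded from the paragraph above, the mapping looks like this; the function is just an illustrative convenience, not part of my model:

# Map normalized demand density (US overall = 1.0) to the access options discussed above.
# Thresholds are rounded from the text; the function itself is just an illustration.
def viable_access_technology(density):
    if density >= 5.0:        # "roughly 5 or 6": fiber/PON works at decent scale
        return "fiber/PON"
    if density > 2.0:         # "greater than 2": 5G/FTTN hybrid broadband
        return "5G/FTTN hybrid"
    return "mobile 5G"        # elsewhere, cellular broadband is the practical option

for d in (7.0, 3.0, 0.8):
    print(d, "->", viable_access_technology(d))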

These numbers show us a lot about what’s happened, and what’s happening, in the broadband access space. Verizon jumped ahead in fiber deployment because their geography generated a higher ROI. Countries like Japan, Korea, and much of Europe are “ahead” of the US in broadband for the same reason. Google targeted only select areas with their fiber, because those areas had decent demand density and the operator incumbent(s) in the geography, serving a larger area that included Google’s target, couldn’t do fiber through enough of that larger area to make it a wise decision.

Even municipal broadband, and municipal partnerships with operators to deploy fiber, can be explained here. A city faces little pressure to deny its residents and businesses fiber broadband just because it can’t provide that same service to other cities. Fiber broadband to high-demand-density pockets is likely to come about increasingly because technology improvements make cherry-picking those areas profitable, as long as it doesn’t generate backlash from customers of the broadband provider who live outside them. And, of course, eventually our big telcos and cable companies are likely to see that new source of competition and take the risk of creating have-not customer angst themselves.

Then there’s the competitive impact of 5G in any form. T-Mobile has just cut the price of its 5G-based wireline broadband alternative by about a quarter, and mobile operators could in theory use 5G to compete with the fixed broadband offerings of other operators. Wireless doesn’t require you to trench vast quantities of fiber or cable in another operator’s territory; all you need is some nodes and/or towers, which of course mobile operators already have.

Consolidated may be right that for most of their customers, fiber is best, but that’s not the case for most customers overall. Fiber will be preferred where demand density is high, but it’s likely that 5G will be more transformational in fixed broadband applications because it’s more versatile across a wide range of demand densities. Operators with limited geographies may be able to deploy fiber to nearly all their customers, and niche fiber plays will surely spring up where densities are high, but universal broadband needs a solution we can apply broadly, and all the wishful thinking and hopeful publicity in the world isn’t going to turn fiber into that solution.