Reading the Cloud Tea Leaves

Cloud computing’s Big Three roughly met Wall Street expectations in the first calendar quarter, but the expectations factored in a continued deceleration of cloud growth. Microsoft managed the best growth numbers, followed by Google, and AWS was third. In my view, none of this should have been much of a surprise, but I do think some thoughtful analysis of the results is in order. There are macro reasons for slowing cloud growth; cloud activity tends to mirror purchase interests and obviously the economy is in a bit of a mess. However, I think that the total picture is more complicated than simple macro forces.

One complicating factor is increased competition in the cloud space. While the combined market share below the Big Three is relatively small, cost concerns about cloud services have been increasing, and enterprises have told me they’re looking deeper into the list of potential providers. That means companies like IBM and Oracle can increase their own sales, at the expense of incumbents. The same forces are likely sustaining Google’s growth relative to Amazon; I’m hearing a bit more interest in Google from both enterprises and startups or OTTs.

I think Amazon’s trailing growth can be partly attributed to the stress on startups, since that’s always been a major source of business for Amazon and figures far less in the results of other cloud providers. Amazon’s cloud comments have been rather trite, but they did comment that sales interest was driven by cost savings. Wall Street research has said that part of Amazon’s growth issue comes from helping cost-conscious customers move to lower pricing tiers, and I find that interesting. First, because the AWS enterprises I’ve chatted with don’t indicate that “savings” was as big a factor in their recent cloud decisions as “cost avoidance”. That would reduce cloud spending rather than creating growth. Second, because cutting costs is a factor only for companies already in the cloud. This supports what I’ve heard from enterprises, which is that slack cost control in application design has created cloud cost problems. Movement to the cloud is meant not to cut costs, but to increase them less. Once there, enterprises increasingly realize they need to optimize applications in a way they should have done from the start.

It follows that Amazon’s future may well be tied to the future of startups, and that’s looking particularly uncertain right now. While many Street analysts still say Amazon is going to stay in the market-share lead, there are also many who think they have challenges, and I vote with that group. Because Amazon doesn’t have any meaningful enterprise premises business, no office tools, no hosting software, they are perhaps a bit more committed to the notion that everything has to move to the cloud. That they’re supporting startups who were never anywhere else (though some are now saving a boatload by self-hosting) likely hardens that view. If that’s the case, then Amazon’s tools may support a front-end vision of the cloud but their marketing/sales work doesn’t, and that means they may not be the first player to come to mind when CxOs think about a cloud commitment.

Google admits that customers are looking for ways to reduce cloud costs, but their own growth rates were strong. Part of that, as I’ve noted, is likely due to price shopping on the part of Azure and AWS buyers, but I’ve noticed Google getting mentioned a bit more often by both enterprises and operators, and so there’s also likely some growth from new adoptions. I’ve believed all along that Google has better baseline cloud technology than either Amazon or Microsoft, but they don’t have the enterprise connection of Microsoft or the web service kit Amazon has. If they want enterprise success they’ll need to push more for business tools, so in the near term they’re likely dependent on operator interest. Their Nephio project is an indication they realize that, but it’s still a baseline tool and not something like a 5G suite.

The problem is that I don’t believe that the telcos are going to be enough, even if we don’t consider that Google is well behind Microsoft in the telco space today. I think all the cloud providers have tended to kiss telco-strategic babies when they should be pushing cloud-strategic concepts, but for Google this is a particular problem because of Google’s superiority in basic cloud tools. They’re not exploiting their own top assets. Even if they do, given the glacial speed of telco progress in new technology areas, Google needs enterprise tools.

They have some, in their suite of office tools that compete with Microsoft 365. They are now starting to push these more effectively, and to integrate them with cloud applications the same way Microsoft does. They also have Kubernetes and Istio, but perhaps because of their software legacy they’ve almost downplayed their role in these and other platform tools by turning them over to the CNCF or another open-source group. That’s a good step, but it’s no substitute for blowing your own horn. Google’s big problem is not having any solid Teams-like offering. Google’s collaborative tools have been re-launched too many times (Hangouts, then Chat, with G Suite rebranded as Workspace around them) to build confidence. That only makes it harder to establish Google as a Microsoft competitor, and Microsoft is the gorilla in the room.

Azure sure seems to be the real success story in the cloud, based on quarterly growth numbers. Microsoft has had an advantage over other cloud providers because of its presence on the premises, and that continues to show in their cloud numbers. Microsoft has more actual “cloud migrations” than any other provider, and despite the growing interest telcos are showing in Google Cloud, Microsoft still leads the pack in the telco space from what I see and hear. They have specific “Cloud for…” offerings for key verticals and even “horizontals” (sustainability is an example), and that’s helped them shorten the cloud sales cycle.

Another benefit Microsoft brings, one that’s not always acknowledged, is their strong presence in the software development space. Visual Studio and GitHub are fixtures in the programming space, and while they can be applied to any cloud or data center environment, Microsoft’s documentation on their tools and capabilities, not surprisingly, features examples from Azure. Rust, perhaps the hottest new programming language, is supported on every cloud, but Microsoft’s Visual Studio IDE is the preferred platform for development and the Rust examples Microsoft offers are for Azure.

Microsoft also has Teams and Microsoft 365, and being able to integrate document production, collaboration, even phone calls, with Azure cloud applications is a powerful capability. Microsoft is working hard to advance cloud-symbiotic tools and enterprises are responding with more cross-tool solutions that still rely on Azure for the application hosting. They’re now pushing AI strongly, with OpenAI Service in Azure and Microsoft 365 Copilot. Microsoft has its fingers in almost every business pie, and that reach gives it a real advantage in the cloud, one that I think Amazon and Google will need to counter specifically if they want to beat Microsoft. Given that Microsoft is constantly adding to its inventory of symbiotic business tools, and that it has a long-standing position in key areas like software development and office productivity, it’s going to be a challenge for competitors to catch up. They’d almost have to buy into Microsoft’s tool strategy because users are already adopting it, and that’s a very weak competitive position to take.

In addition to their initiatives in AI and their incumbency in development, Microsoft also seems to be making a major thrust in the edge computing space. So far, this isn’t likely to contribute much cloud revenue because edge hosting in the cloud isn’t widely available from anyone, including Microsoft, but Microsoft wants to be sure that it gets edge development going in a direction that favors its own offerings. To prep for this, they’re defining the edge as the customer edge and they’ve created a “fog computing” category to express what edge-of-cloud hosting would do.

I think it’s interesting that Microsoft is planning for an application evolution that retains and even expands customer hosting of at least some applications/components, while at the same time blowing kisses at the “move to the cloud” story. I guess a part of that is reluctance to diss the value proposition that’s captivated the media; if Microsoft were to tell the blunt truth about cloud adoption their competitors would jump on the move as showing Microsoft isn’t fully committed to the cloud. But telling the truth is generally a good idea in the long run, and Microsoft may be approaching the point where they’ll need to reposition themselves to accommodate what enterprises are now realizing—the cloud isn’t where everything is going.

Can Money Be Made on Digital Twinning?

Is it possible to make money on a “digital twin” metaverse, on the empowerment of workers not normally empowered, on the binding of consumerism to consumers in a better way? I’ve talked in past blogs about the technology of such a metaverse, and I’ve brushed a bit at the things that it could benefit, but we need to be hard-headed here. Can we see a digital-twin metaverse creating an ecosystem that would be profitable enough to encourage participation? If not, technology is irrelevant.

The essential notion of digital twinning is to create a computer model of a real-time system, in order to better assess its overall state and support its operation and use. The information used to create the model is presumed to be gathered from sensors, which means IoT. The “support” could take the form of assisting human workers or driving automated systems through direct control.

From an architecture perspective, such a system would consist of four parts. First, the model, which is presumably a dataset that collects not only real-time information but also information inferred from that primary source. It’s the “digital twin” itself. Feeding it is a collection of sensors and the associated software, and that’s the second part. Third, a set of processes running off the model would be assigned the mission of establishing state and matching goals to the state. Finally, a set of processes running off the model would interface the model to external elements, which to me would include both humans and automated processes.
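As a minimal sketch of those four parts, those pieces could be wired together as below; all class, field, and sensor names here are hypothetical, invented for illustration rather than drawn from any product.

```python
from dataclasses import dataclass, field
import time

@dataclass
class TwinModel:
    """Part 1: the digital twin -- real-time readings plus inferred state."""
    readings: dict = field(default_factory=dict)   # latest value per sensor
    inferred: dict = field(default_factory=dict)   # state derived from readings

class SensorFeed:
    """Part 2: sensor collection and associated software (stubbed here)."""
    def __init__(self, model: TwinModel):
        self.model = model
    def ingest(self, sensor_id: str, value):
        # Timestamp each reading so staleness can be assessed later
        self.model.readings[sensor_id] = (value, time.time())

class StateAssessor:
    """Part 3: establish state and match goals against it."""
    def __init__(self, model: TwinModel, goals: dict):
        self.model = model
        self.goals = goals   # sensor_id -> upper limit
    def assess(self) -> list:
        alerts = []
        for sensor_id, limit in self.goals.items():
            reading = self.model.readings.get(sensor_id)
            if reading is not None and reading[0] > limit:
                alerts.append((sensor_id, reading[0], limit))
        return alerts

class ExternalInterface:
    """Part 4: expose the model to humans and automated processes."""
    def __init__(self, model: TwinModel):
        self.model = model
    def snapshot(self) -> dict:
        return {k: v[0] for k, v in self.model.readings.items()}

# Wire the four parts together and push one reading through
model = TwinModel()
feed = SensorFeed(model)
assessor = StateAssessor(model, goals={"temp_sensor_1": 75.0})
iface = ExternalInterface(model)

feed.ingest("temp_sensor_1", 80.2)
print(assessor.assess())    # goal exceeded, so one alert
print(iface.snapshot())
```

A real system would replace the stubbed sensor feed with IoT event ingestion and run the assessor continuously, but the division of labor among the four parts is the point of the sketch.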

None of these pieces are particularly challenging. I contend that a reasonably qualified team of a dozen programmers/architects could produce a prototype of this system in six to nine months. Would it be comprehensive, merchandisable? Surely not at that point, but it would be enough to demonstrate a credible model and attract support, provided, of course, that it was offered by a credible player. But would a credible player emerge without the market support that only a credible model could build? That’s the challenge here.

Breaking the chicken-and-egg loop here would likely require a clear addressable opportunity. Start by dividing the labor force into three groups: the “office” workers, the “retail/service” workers, and the “product and production” workers. Roughly half of all workers work at a desk and so could be considered “office” workers. The remainder are fairly evenly divided between the other two groups.

Most of the currently active empowerment strategies target the office group, and a limited number can also target workers in the other groups. It’s hard to gather data on this except anecdotally, but my contacts suggest that 70% of office workers are targeted and about 8% of each of the other groups, which means that current technology actually works to empower roughly 40% of workers. I argue that it’s the remaining 60% that need the digital-twin metaverse, which means we could use it to open an IT opportunity at least as large, in worker population, as the one we already target. That should surely be enough to gain credibility.
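Using the percentage estimates above, the coverage arithmetic is simple to check:

```python
# Share of the labor force in each group (estimates from the text)
office = 0.50
retail_service = 0.25
product_production = 0.25

# Share of each group reached by current empowerment strategies
covered = office * 0.70 + retail_service * 0.08 + product_production * 0.08
print(f"covered by current technology: {covered:.0%}")   # roughly 40%
print(f"left for digital-twin empowerment: {1 - covered:.0%}")
```

The uncovered majority is the addressable opportunity the digital-twin metaverse would open.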

Obviously it has not been, at least not in an effective way. Enterprises with a lot of workers in the second and third groups tell me that there are three issues. First, vendors and service providers aren’t actively promoting offerings that target these two groups. Second, what applications and services enterprises do see are specialized to a job and/or industry, intersecting only a small part of those two groups within their companies. Third, the technologies and practices for addressing workers in these groups are different, offering no resource or management economies.

What nobody mentions is the difference in the way that various jobs and industries have to be assessed and empowered. For example, retail and service workers typically have lower unit values of labor, which means that in order to recover a given investment in empowerment you’d have to empower a larger number of workers through any contemplated project. Among the major job classifications recognized in the US, some can provide a relatively high unit value of labor and some have large worker populations, but interestingly none have both. That makes efforts at empowerment a balancing of the two factors, and every job and industry seems to require its own balancing. In some cases, this means taking a fairly innovative view of how you might address them.
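A quick sketch of that balancing, with entirely hypothetical dollar figures, shows why a lower unit value of labor forces a project to empower a larger population to pay back:

```python
import math

def workers_needed(project_cost, annual_value_per_worker,
                   uplift_fraction, payback_years=3):
    """How many workers must a project empower to recover its cost?

    annual_value_per_worker: unit value of labor (annual, per worker)
    uplift_fraction: share of that value the empowerment recovers
    payback_years: period over which the benefit must cover the cost
    """
    benefit_per_worker = annual_value_per_worker * uplift_fraction * payback_years
    return math.ceil(project_cost / benefit_per_worker)

# The same hypothetical $2M project, same 5% productivity uplift:
print(workers_needed(2_000_000, 120_000, 0.05))  # office-like labor value: 112
print(workers_needed(2_000_000, 40_000, 0.05))   # retail-like labor value: 334
```

With a third the unit labor value, the project needs three times the empowered population, which is the trade-off every job classification has to balance in its own way.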

I had an interesting chat with the CIO of a restaurant chain, who reached out to me to understand the way “digital-twinning” might work for a restaurant. It turns out that rather than considering a worker as a target, you have to consider the restaurant as the target, meaning you model the restaurant as an ecosystem. That shouldn’t have been a surprise given that this would be true for empowering assembly-line workers as well, but it points out that worker empowerment doesn’t necessarily mean twinning a worker, and in fact probably doesn’t. It means twinning the real-time system that workers are a part of, and that’s in my view the underlying problem with addressing those two underserved sectors.

Creating a model for a workplace, which is really what we’re talking about here, is a three-level challenge. First, it takes somebody who has a detailed understanding of the workplace and the work to do that, which means a line person. That person isn’t likely to understand the process of twinning, so you also need a model specialist to translate the workplace/work insights into a model. Then you need the actual modeling tools, the model, and the interfaces between the model and the real world. That means that somebody has to support the general notion of digital twinning, and that seems intuitively like a heavy lift for a vendor who’s likely trying to pick the low apples of opportunity.

Years ago, I worked on software for a company that was having a major problem with fixed asset tracking. They had computers, typewriters, desks, chairs, tables, and so forth, and of course when people moved around some of these moved with them. Other stuff stayed where it was, and some stuff disappeared. The company had tagged the asset types with serial numbers, and annually they did an inventory, sending people into each space to record what was there. They wanted to do this faster and more often, and their thought was to put an RFID tag on each asset so the process of recording it would be quicker. They envisioned an inventory taker walking to a room, shooting a room tag with an RFID gun, then entering and shooting each asset. From that, they could construct a “digital twin” of the spaces and contents.

The problem was that the twin quickly lost connection with reality, which meant you had to do inventory more often, which was more costly. The solution was to put an RFID scanner at the entry points to each space, so when something was taken out or brought in it was recorded. Now the map was up to date. However, you can see that this application, having been developed from scratch and targeting this problem, wouldn’t create much that could be leveraged for other “twinning” missions.
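A minimal sketch of that entry-point approach (the class, space, and tag names are invented for illustration) shows how doorway scan events keep the twin current without re-inventorying:

```python
from collections import defaultdict

class AssetTwin:
    """Digital twin of spaces and their contents, kept current by
    RFID scanners at each doorway rather than periodic inventories."""
    def __init__(self):
        self.contents = defaultdict(set)   # space -> set of asset tags
        self.location = {}                 # asset tag -> current space

    def scan(self, space: str, asset: str, direction: str):
        """Record an asset passing a doorway scanner ('in' or 'out')."""
        if direction == "in":
            # If the asset was last seen elsewhere, move it in the twin
            old = self.location.get(asset)
            if old is not None:
                self.contents[old].discard(asset)
            self.contents[space].add(asset)
            self.location[asset] = space
        elif direction == "out":
            self.contents[space].discard(asset)
            if self.location.get(asset) == space:
                del self.location[asset]   # in transit until scanned in again

# A desk leaves room 101 and arrives in room 202
twin = AssetTwin()
twin.scan("room-101", "desk-0042", "in")
twin.scan("room-101", "desk-0042", "out")
twin.scan("room-202", "desk-0042", "in")
print(twin.location["desk-0042"])   # room-202
```

Note how tightly the model is bound to this one mission: spaces, doorways, and tags. That specificity is exactly why the application produced little that could be leveraged for other twinning missions.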

Most of the applications that could be addressed through digital twinning, even those being developed today, aren’t built on a general digital-twin model. I think the biggest challenge we face in addressing new empowerment opportunities, even opportunities in consumer empowerment, is that of silo development. We’ve probably spent ten times what it would have cost to create a general model that could have propelled the whole space forward, and we still don’t have that model.

We also don’t have broad recognition of an almost-as-important truth, which is that a digital-twinning model is a general case of a social metaverse, and that metaverses and digital twinning are two faces of the same coin. That means we’re at risk of directing metaverse development into the creation of another set of silos. Any virtual reality has to start with a model of the reality. Any participatory virtual reality has to integrate the real world with the model. Whether we visualize the result or use it to empower workers or automate processes is just the way we manage the output. The rest is all the same—a digital twin model.