The first full week of January is a traditional time for the Street to look at IT spending for the coming year, and this year is no exception. We have predictions and analysis for networking, data centers, cloud computing, and pretty much everything else. I also had an unusual number of CIOs contact me in late December and early January to complain about the state of things. What I hope to do here is combine those two sets of data points.
The complicating factor in dealing with Street research is that it’s obviously focused on Street issues, meaning stock prices and exploitable, investable trends. That focus almost always makes it hard to synthesize real and significant macro market trends. The Street sees a disease not holistically but as a set of symptoms, so what I’m going to do is take their raw information and macro-ize it. You can always read Street pubs directly if you want their stock tips.
The macro trend that’s most influential in IT planning and spending is the compacting of the value chain. Companies are accustomed to reaching their customers directly through advertising, and the web ad space is only magnifying that trend. Now they’re trying to reach their customers more directly in other ways, too: through direct purchase portals, compacted distribution, and so forth. They also want to integrate online advertising/promotion with online marketing/sales. This has been the largest factor in the increased size of data centers, and the largest source of data center spending.
What became clear in 2019 is that companies have pushed too much of the implementation of this new integrated model of marketing and sales into the data center itself, making it less responsive and elastic than it needs to be. Transaction processing accounted for 88% of data center spending in 2010, 85% in 2015, and only 75% in 2019. As a share of total IT spending, it accounted for about 80% in 2010, 72% in 2015, and only 62% in 2019. The difference is that the public cloud and multi-tenant data centers (server outsourcing) have taken a bigger bite.
What CIOs are planning is to take the non-transactional pieces of data center spending out of the data center in 2020 and beyond. This isn’t the same thing as “moving to the cloud”; it’s really a recognition that they took the wrong (but traditional) path of in-house implementation in the early phases of shifting their focus to online promotion and fulfillment. The process is bringing to the fore an issue that CIOs had previously ignored, which is the lack of a unified “hybrid” model for cloud and data center.
Anyone who’s ever written a core business application recognizes that online transaction processing (OLTP) is the centerpiece of a rather vast business ecosystem. What it takes to relieve or add to inventory, or to pay or receive bills, is nothing in comparison to what it takes to perform all the surrounding business functions. I looked recently at the code that updated a retail database tracking a hundred thousand products. The actual update, including accommodation for distributed transaction processing and two-phase commit, was 32 lines. The whole retail application was over two hundred thousand lines.
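To make that contrast concrete, here’s a minimal sketch of what such a core update can look like. It’s illustrative only: the table, the driver (psycopg2), and the function are my assumptions, not the application I reviewed, and the distributed-commit plumbing is omitted.

```python
# A minimal sketch of a core OLTP update: one atomic inventory adjustment.
# Schema and names are assumed for illustration; psycopg2 is my choice of
# driver. The point is how small the transactional heart really is.
import psycopg2

def adjust_inventory(conn, sku: str, delta: int) -> None:
    """Atomically relieve (delta < 0) or add to (delta > 0) on-hand stock."""
    with conn:                        # psycopg2: commit on success, roll back on error
        with conn.cursor() as cur:
            cur.execute(
                "UPDATE inventory SET on_hand = on_hand + %s "
                "WHERE sku = %s AND on_hand + %s >= 0",
                (delta, sku, delta),
            )
            if cur.rowcount != 1:     # unknown SKU, or stock would go negative
                raise ValueError(f"cannot adjust {sku} by {delta}")

# conn = psycopg2.connect("dbname=retail")   # connection details assumed
# adjust_inventory(conn, "SKU-12345", -2)    # relieve two units of stock
```

The transactional core really is that small; the other couple hundred thousand lines are validation, pricing, reordering, reporting, and the rest of the surrounding functions, and those are the pieces whose location is now in question.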
What this means is that if you think of the cloud and the data center as being the front and back ends, respectively, of applications, you miss the question of how much of the current back end really belongs in the back, given trends in application design. The linkage between database maintenance and activity reporting is obvious in a monolithic application, but in a world of distributed components and complex workflows, there are a lot of places you can obtain all the data you need. If some of those places are close to the user (which they are), then moving some of this work to the front-end piece is logical. However, moving things to the cloud often means moving workflows across cloud boundaries, which may generate additional costs…and then there’s the development challenge.
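Here’s a quick sketch of that “lots of places to get the data” point, using an in-memory stand-in for a real event bus (the topic and handler names are mine, purely illustrative). In an event-driven workflow, a reporting component subscribed near the user sees the same facts as the back-end database, so it doesn’t have to live behind it.

```python
# Sketch: in an event-driven workflow, any subscriber along the path sees
# the same business facts, so reporting need not query the back-end database.
# EventBus is an in-memory stand-in for a real broker (names assumed).
from collections import defaultdict
from typing import Callable

class EventBus:
    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subs[topic]:
            handler(event)

bus = EventBus()

# The OLTP service is the only subscriber that must stay with the database.
bus.subscribe("order.placed", lambda e: print("OLTP: relieve inventory for", e["sku"]))

# Activity reporting can run in the front-end/cloud piece, near the user,
# because it consumes the same event rather than the back-end database.
bus.subscribe("order.placed", lambda e: print("reporting: count a sale of", e["sku"]))

bus.publish("order.placed", {"sku": "A-100", "qty": 1})
```

The caveat in the paragraph above still applies, though: every boundary an event crosses (data center to cloud, or cloud to cloud) can carry egress and per-call charges, so where you tap the flow is an economic decision as much as a technical one.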
In the public cloud space, the Amazon and Microsoft positioning on hybrid cloud reflects this old/new vision tension. Amazon’s cloud approach has focused on the cloud itself, implicitly saying that the cloud could do its own thing and simply jigger a connection with the data center back-end processes. Microsoft, from the first, has implied that the future application model itself has to span that front/back-end gap.
Where Microsoft has an advantage in the cloud space is that they have always thought in terms of a PaaS model for the cloud (Azure pioneered that), and they’ve always had a strong linkage between their cloud services and their development tools. They’ve exploited that linkage in a way that builds the confidence of corporate users, users who have developers on their staff but aren’t professional software development organizations.
But perhaps even more important is the fact that Microsoft, because its cloud strategy is a PaaS, has a holistic vision of a cloud application. If you recognize “cloud development” and also support “data center development” (via a strong developer network and product suite), you can begin to align the two and eventually converge them. Of all the cloud providers, who besides Microsoft has a real developer tool suite, complete with middleware, languages and language processors, and even a well-developed community (MSDN) to support it all?
The Street touts the cloud as the inevitable end-game for everything, and I don’t think that’s the case. Hybrid cloud is the preferred model overall, so the question isn’t when “everything” goes to the cloud, but how the technical and economic model of the cloud can maximize what can be made to work there effectively and at a tolerable cost. That depends, more than anything, on having a model of cloud applications that frames design in a way that allows free migration of things to the cloud. Secondarily, it depends on cloud providers working their pricing models to encourage the optimal use of the cloud, the use that an optimized application model can generate.
It’s pretty clear that containers and Kubernetes will play a major role in the cloudification of applications, but neither of the two establishes an application model, a specific strategy for program design that lets the cloud play as big a role as could be economically justified. Microsoft has a better handle on how to do that because they’ve supported programmers for decades. They also have a broad relationship with users of all kinds. You have to commit to the cloud to commit to Amazon, but there’s little you could be doing, or plan on doing, in software that doesn’t make Microsoft a potential partner. If Microsoft makes that first step a step that could take you to Azure, then they’ve got a big leg up on others.
The problem with this happy-for-Microsoft situation is that they’re really no further ahead than their competitors in framing what a true “cloud-native PaaS” might look like. Part of the problem is that this is a subject nobody other than a software professional is likely to understand, and surely nobody but a software professional could write about it. It’s a given that much of the cloud-native material we all read is written by people who don’t program, but my own interactions with that group of CIOs show that even CIO-level people are unsure about where things are going, or what they might look like when they get there.
Monolithic programming practices led to monolithic languages and design paradigms for software. We may now have come a bit too far in compensation, talking about things like “microservices” and “lambdas” as though an enormous mesh of connected micrologic elements could add up to an efficient piece of software. The truth is that a lot of software is monolithic because that’s the best model. The “cloud-native” approach, if that means nothing but microservices connected in a service mesh, is an invitation to interminable run times. The industry needs a model for the hybrid cloud of the future, and the question is, and has been, who’s most likely to provide it.
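To put rough numbers on “interminable run times”, here’s a back-of-the-envelope sketch. The per-hop figures are my assumptions, not measurements: a couple of milliseconds per mesh hop is plausible once you count the network round-trip, serialization, and sidecar proxies, versus effectively nothing for an in-process call.

```python
# Back-of-the-envelope latency model (figures assumed for illustration):
# a workflow of N sequential steps pays per-call overhead on every step.
IN_PROCESS_CALL_MS = 0.000_05   # rough cost of a local function call
MESH_HOP_MS        = 2.0        # assumed round-trip + sidecar proxy per hop
WORK_MS            = 0.5        # assumed useful work done in each step

def total_latency(steps: int, per_call_ms: float) -> float:
    """Latency of `steps` sequential calls, each doing WORK_MS of real work."""
    return steps * (per_call_ms + WORK_MS)

for steps in (5, 20, 100):
    mono = total_latency(steps, IN_PROCESS_CALL_MS)
    mesh = total_latency(steps, MESH_HOP_MS)
    print(f"{steps:3d} steps: monolith ~{mono:6.1f} ms, mesh ~{mesh:6.1f} ms")

# 100 sequential steps: ~50 ms in-process versus ~250 ms through the mesh,
# before retries, queuing, and cross-cloud boundary charges are counted.
```

The point isn’t that meshes are bad; it’s that per-call overhead multiplied across an enormous mesh of micrologic elements is a first-order design constraint, and only an application model, not the mesh itself, determines how many hops a transaction takes.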
Google’s opportunity to re-launch its cloud initiatives comes from the current confusion. Google actually has the most solid model for the deployment of cloud-native applications, Kubernetes, which emerged from its own deployment of search and content elements. The problem with Google, which I think is also a problem for Amazon, is that it doesn’t understand the transactional world, the core business applications part, as well as Microsoft does. It also lacks pride of place at the table of data center planning, which forces it to try to revolutionize the dog (the data center, where the dominant investment lies) by shaking the tail (the cloud).
Complicating all of this is the fact that the future model for hybrid cloud could well emerge from the data center side; after all, that’s where current software and server investment is focused. In the data center, both IBM/Red Hat and VMware seem to be working on their own Kubernetes ecosystems, banking on the fact that neither Amazon nor Microsoft will come up with the right hybrid formula by pushing from (to repeat my prior analogy) the tail side. IBM has intrinsic credibility in the data center, and VMware has what could be a logical model for hybridization. Neither, so far, has framed a software architecture to span the hybrid gap, though. Without one, there’s still a chasm for CIOs to cross, and they’re worried about crossing it.
This is the real battleground for 2020, folks. Who will define the “virtual middleware” whose APIs frame the future hybrid cloud? The winner is going to change the industry.