Just what benefits drive or justify cloud adoption? This is a question you’d think had been asked a million times since the dawn of the cloud era, but in surveys and discussions with CIOs, I’m finding that’s not been the case. Companies had largely accepted the widely publicized view of cloud benefits until about 2016; since then, they’ve been demanding that a stricter business case be made for cloud projects. In the process, they’ve generated some data on the cloud and its role in future IT. Some of it is very interesting.
I want to start this blog with a note, or warning if you like. The factors that determine what value the cloud will bring to a given application or even a given company are highly variable. The work I’ve done on the question of the value of the cloud reflects broad statistical trends, and those are useful to planners looking to make high-level decisions. While I’m still getting a small number of new data points, they aren’t shifting the results enough to justify holding back on this discussion. But remember, there’s nothing that my work can do to say whether a single application, or even a single company, can benefit from the cloud at all, much less how substantial the benefit can be. Look at this material as general guidance, an almanac and not a cookbook for cloud adoption.
Policies evolve, so we have to open with some cloud history. Companies tell me that when they first looked at the cloud, they were of the view that public cloud services would be “cheaper” than data center services (every story on the cloud said that, after all, and most still do). The cost reduction was first seen as a server-versus-IaaS cost difference attributable to “economy of scale”, but it then broadened to reflect total cost of ownership (TCO). Many companies admit that this broadening was a response to the fact that early cloud projects, based on narrow server-capital-cost-versus-IaaS-hosting comparisons, quickly plucked the low apples and didn’t result in much cloud adoption at all.
The early cloud projects that did succeed were migration projects, largely related to the then-popular topic of “server consolidation”. Many companies had been buying relatively inexpensive servers for specific line department missions, many of which could have been done in the data center, simply because the purchase authority of the line managers allowed the buy. Those managers said it was easier (and sometimes cheaper, given how IT costs were allocated in their company) to buy and deploy a server and applications than to have IT handle them. The result of this was a lot of under-utilized servers, and moving these to the cloud offered a cost savings.
That raises the first point about cloud benefits. Everybody buys commercial off-the-shelf servers, and while big cloud providers may get a slightly better price than a big enterprise, the cost difference is almost never enough to cover expected cloud provider profit margins. Thus, a server in the cloud is almost never going to be cheap enough relative to one on premises to justify a shift, unless the premises server is poorly utilized. This truth came late to the CIOs I heard from; most said that they didn’t really understand the early cloud opportunity until 2014, and nearly a quarter said they hadn’t gotten their benefit facts straight even in 2016.
In the next phase of cloud justification, the “TCO phase,” companies argued that even though their own pure hardware and environmental costs were at least competitive with those of the cloud, the cost of operations management for application hosting achieved a very real “operations economy of scale” in the cloud. And yes, this was true in a number of cases, but most companies quickly learned that IaaS public cloud services didn’t really impact the majority of their operations costs, which were associated with the software platform (OS and middleware) and with application lifecycle management. At the end of the day, based on my current data, the combination of capex and opex savings for application migration to the cloud is unlikely to make a business case for more than about ten percent of applications run traditionally in the data center. Again, this seemed largely consistent with company experience as related in 2019, but it wasn’t clear at the time.
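To make that arithmetic concrete, here is a back-of-the-envelope sketch in Python. Every dollar figure and utilization level in it is hypothetical, chosen purely for illustration and not drawn from my model or any survey; the point is simply that the IaaS price carries the provider’s margin and the platform/lifecycle opex doesn’t go away, so the cloud only wins when the on-premises server is lightly used.

```python
# A back-of-the-envelope migration check. All figures are hypothetical monthly
# costs for a single workload, chosen only to illustrate the capex/opex logic
# discussed above.

def monthly_costs(utilization):
    # On premises, the workload owns a whole server whether it uses it or not.
    server_and_facilities = 300.0   # amortized hardware, power, floor space
    platform_ops = 500.0            # OS/middleware and application lifecycle work
    on_prem = server_and_facilities + platform_ops

    # In IaaS, the instance can be sized to what the workload actually needs,
    # but the price includes the provider's margin, and the platform and
    # lifecycle operations work does not go away.
    iaas_instance = 450.0 * utilization + 50.0
    cloud = iaas_instance + platform_ops
    return on_prem, cloud

for util in (0.15, 0.60, 0.90):
    on_prem, cloud = monthly_costs(util)
    verdict = "cloud wins" if cloud < on_prem else "stay put"
    print(f"utilization {util:.0%}: on-prem ${on_prem:.0f} vs cloud ${cloud:.0f} -> {verdict}")
```

Run with these made-up numbers, only the badly under-utilized server makes the migration case, which is exactly what the CIOs eventually discovered.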
By about 2016, a slight majority of companies had realized that the problem with cost-based cloud justification was that the application models likely to benefit most from cloud hosting could never have been justified on the premises, and thus weren’t there to be migrated. Could it be, they thought, that the real benefits of the cloud aren’t focused on what we run today, but on what we don’t? It turns out that was true, but who wants to redo every application to optimize its cloud compatibility? For third-party software, you’d have to wait for the provider to do that in any case. And about a third of companies told me they couldn’t realistically redo even their in-house applications because of the major cost, resource needs, and so forth. Could they ease into the cloud, somehow? Yes, and that ushered in the next period of cloud benefit analysis and justification.
Most companies were already looking at improving their applications (application modernization, or “appmod”), with “improving” meaning creating a better user quality of experience for their workers, customers, and even business partners, or what we’d call “UX” today. In both 2016 and 2017, businesses reported this to be their highest development priority. In most cases, this UX improvement stuff focused on the graphical user interface or mobile app interface to applications rather than on changes to the core (usually transaction-processing) piece of the applications. That meant that if the cloud could be used to “front-end” traditional applications, the cloud-specific changes would be confined to areas where application modernization was already being funded. This front-end-cloud mission is the current dominant model for enterprise cloud adoption.
Using the cloud as a front-end to traditional IT can generate at least a minimal cost savings for about 30% of current applications, according to my model, and with the addition of the scalability and resilience benefits, most of these can then justify cloud adoption to front-end existing data center IT. This doesn’t move the data center applications themselves to the cloud, only the incremental development done for front-end modernization, but it’s still a larger market for cloud services than the “migration” market. This is the next low-apple cloud opportunity, the one we’re working against today in most cases. It’s also preparing us for the next big step.
As a front-end technology, the cloud’s ability to scale (and “un-scale”) as workloads change, to replace failed components, and to support rapid development practices (continuous integration/continuous delivery, or CI/CD) that keep applications responsive to business change creates a powerful business case. The front-end mission also had the benefit of focusing the cloud and open-source communities on the development practices that promote all these benefits. This started the current “container wave” of application development and deployment.
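To show what that front-end pattern looks like in practice, here is a minimal, hypothetical sketch in Python. The back-end URL, the endpoint, and the response shaping are all made up; the point is that the cloud piece is stateless, so copies can be added or removed as load changes (and failed copies simply replaced) while the data center transaction system behind it stays exactly as it is.

```python
# Minimal sketch of the cloud "front-end" pattern described above. BACKEND_URL
# and the record format are hypothetical; the data center system behind it is
# assumed to be unchanged. Because this process keeps no state of its own, any
# number of copies can run at once, which is what enables the scaling and
# resiliency benefits.
import json
import os
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKEND_URL = os.environ.get("BACKEND_URL", "http://backend.example.internal")

class FrontEndHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Fetch the raw transaction record from the existing back-end system...
        with urllib.request.urlopen(BACKEND_URL + self.path) as resp:
            record = json.load(resp)
        # ...and reshape it for the modern web or mobile UX.
        body = json.dumps({"data": record, "served_by": "cloud-front-end"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Each copy listens on port 8080; a load balancer (or Kubernetes, which
    # comes up below) spreads user traffic across however many copies exist.
    HTTPServer(("", 8080), FrontEndHandler).serve_forever()
```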
Containers, as I’ve noted before, are more like portable units of work than like an alternative to VMs. Container hosting (which is an alternative to VM hosting) is more efficient, and this means that you can get more application bang for your hosting buck with them. It doesn’t alter the cloud business case in a pure hosting-cost comparison because you can host containers in the data center too, but container operations and orchestration overall are a good match for small application components, leading even to what are called “microservices”.
What’s happening now is that we’re building on the container concept to create the software foundation for the next step in cloud justification evolution, which is arguably the big step: the concept we call “cloud-native”. But cloud-native is a lockstep progression, a kind of technology three-legged race. On one hand, we have the opportunity line organizations have to adopt a new model of application, one that was never constrained by either the limitations of the data center or the limitations of application architects trained in the data center. On the other, we have the toolkit needed to build such applications. As long as our tools build monolithic applications, they’re not suitable for cloud-native, and their limitations will hamstring the education of developers and CIOs in cloud-native principles. About one business in five today thinks it has mastered cloud-native.
We already know that containers are the foundation of the cloud-native future. Kubernetes has become the irreplaceable tool in container orchestration, to the point where no business with organized IT should be considering anything else. We’re starting to see a collection of tools coalesce around Kubernetes to create an ecosystem, the middleware kit that will be the platform on which cloud-native applications will be built.
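For those who haven’t touched it yet, here is a tiny sketch of what that orchestration looks like from code, using the official Kubernetes Python client. The deployment name and namespace are hypothetical placeholders; the point is that you declare a desired state and Kubernetes converges the cluster to it, restarting failed containers and spreading copies across hosts on its own.

```python
# A small sketch of declarative orchestration using the official Kubernetes
# Python client (pip install kubernetes). The "ux-front-end" deployment and
# the "default" namespace are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()   # reads the local kubeconfig; inside a cluster,
                            # use config.load_incluster_config() instead
apps = client.AppsV1Api()

# Declare the desired state: five replicas of the front-end deployment.
# Kubernetes keeps the cluster in that state, replacing any copy that fails.
apps.patch_namespaced_deployment_scale(
    name="ux-front-end",
    namespace="default",
    body={"spec": {"replicas": 5}},
)

# Check what is actually running versus what was asked for.
for d in apps.list_namespaced_deployment(namespace="default").items:
    print(d.metadata.name, d.spec.replicas, "desired,",
          d.status.ready_replicas, "ready")
```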
Cloud-native will extend the targets for cloudification in two ways. First, it will allow enterprises to modify the “front-of-the-back-end” part of their current IT applications to fit cloud principles. That allows cloud benefits in elasticity and resiliency to be extended to more of the current application base. Second, it will allow developers to frame new applications that are more contextual, meaning they tune themselves to the environment their users are inhabiting, making workers more productive and consumers more eager to spend. The first piece of that shift generates a business case for about 35% of current applications (actually components of applications) to be migrated to the cloud. The second eventually generates about a trillion dollars per year in additional IT spending by unlocking more productivity and revenue. This is the long-term future of the cloud, making it coequal to traditional computing and data centers, then actually surpassing them…eventually.
But we’ve got to get back to that one-in-five who get it. Cloud-native is, by a slight margin, an accepted goal for enterprises I’ve heard from, but interestingly it’s those who depend on third-party software that think it’s going to happen within the next two years. Those who have to self-develop push the timeline out another year or two. But third-party software vendors themselves seem less optimistic.
I think the combination of tool development and concept acceptance is critical to realizing this vision, and also to establishing the timeline. It is in truth easier to push the third-party software market to cloud-native, because the cost of change can be amortized across an entire customer base. Third-party software dominates the market these days, and that means that the developers of that software have a critical impact on the pace of cloud-native evolution. However, software providers are going to drag their feet until there’s an established “cloud-native platform-as-a-service”, or middleware and API set, that they can use for their own development and expect to find supported in their target buyers’ environments. Until that’s in place, we can’t expect them to commit to a transformation to cloud-native, and we don’t have it yet.
This is a highly simplified vision of a very complicated process. The key question at each phase of cloud evolution, including the current one, is how we balance the cost of adoption in that phase against the benefits cloud adoption would bring. That’s what makes this so application-and-company-specific. I’ve worked with and for companies that ran applications that hadn’t been changed in 20 years, and in fact couldn’t be changed now because they’d lost the source code or the skills. Other companies that have been working hard on things like CI/CD have probably componentized their software already, and could adopt cloud-native practices with much less effort, provided there was an established platform or framework they could work toward (back to our three-legged race).
How things would net out for a company under favorable conditions can be statistically assessed. Ten percent of applications could be expected to be migrated to the cloud under a server-consolidation mandate. Another 30% of application code, created by projects aimed at improving UX, could be justified for cloud hosting if the UX improvements were made in a cloud-centric (or cloud-native) way. Another 35% of application code could be shifted to the cloud by applying cloud-centric/native principles to the portions of data center apps that the cloud front-end touches. That adds up to 75% of application code eventually being made cloud-ready (some of it will still run in the data center for governance or overall cost reasons). In addition, every company could justify increasing IT spending by about 1.8 times if it built contextual applications to improve productivity and sales, using cloud-native principles. That summarizes the statistical picture of cloud opportunity as my model presents it today.
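Restated as simple arithmetic (the percentages and the multiplier are just the model estimates quoted above, not measurements for any single company), the summary looks like this:

```python
# The cloud-opportunity arithmetic from the paragraph above, restated. These
# figures are my model's statistical estimates, not data for any one company.
migration = 0.10          # apps migrated under a server-consolidation mandate
ux_front_end = 0.30       # UX/appmod code built cloud-centric from the start
front_of_back_end = 0.35  # back-end portions reworked on cloud-native principles

cloud_ready = migration + ux_front_end + front_of_back_end
print(f"Application code eventually cloud-ready: {cloud_ready:.0%}")  # 75%

# On top of that, contextual cloud-native applications could justify IT
# spending of roughly 1.8 times today's level.
spending_multiple = 1.8
print(f"Justifiable IT spending versus today: {spending_multiple:.1f}x")
```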
Over how long? I can’t say. I pooh-poohed the whole idea of “migrating to the cloud” from the first, but in the heady days of cloud hype nobody believed that it wouldn’t sweep the data centers aside, so nobody thought about what the future would really need. In fact, we didn’t see any rational tools to support a cloud-centric future develop until about 2016, about a decade after public cloud services came along. That’s hardly a breathtaking drive along the cloud path so far, and if ignorance and hype stalled things early on, there’s little sign those two factors won’t continue to hurt us.
My guess, based on highly imperfect modeling? We can expect to see realization of the UX-improvement opportunity peak in about 2022. At about that time, we can expect to see real growth in the cloud-native contextualization opportunity, but that won’t hit its stride for another five years. By the end of the next decade (by 2030), we can expect to have a true cloud-native world.