The Impact of the Cloud on Enterprise IT

The impact of the cloud on enterprise IT, whether the technology impact or the financial impact, has been a difficult topic to navigate.  The financial impact, meaning whether cloud spending will erode IT spending, is particularly difficult, because it depends on a number of factors where popular opinion seems to diverge from objective reality.  When that happens, I try to model the thing, and that’s what I’m going to talk about today.  One thing the modeling did was convince me that the technology and financial impacts of the cloud are, if not in lock-step, at least somewhat synchronized.

In the early days of the cloud, the popular vision was that “everything” was going to “migrate to the cloud.”  That was never going to happen, nor is it going to happen now.  If it had been true, cloud-provider spending would have risen, and enterprise IT spending fallen, to reflect the shift in hosting location.  The fact that this vision was…well…dumb doesn’t negate the fact that it was newsworthy, and that it made market trends simple to predict.

The truth is way more complicated.  Enterprise IT spending has three components, cloud-wise.  The first component, which accounts for about 75% of current enterprise IT spending, is core business applications that aren’t going to migrate to the cloud for security, compliance, and pricing reasons.  The second component, which is the other 25% of current IT spending, is non-core applications that could just as well be cloud-hosted as data-center-hosted, and whose migration will depend on pricing.

If we stuck with the move-to-the-cloud technical paradigm, two things would happen.  First, we’d top out with cloud spending at about a quarter of IT spending overall.  Second, overall IT spending would tend to rise a bit more slowly than GDP, which doesn’t generate all that much excitement in the tech world or on Wall Street.  What could help us do better?  The third component.

Since I’ve used up all 100% of current enterprise IT spending, you might wonder what’s left to say about the third component.  That component is itself divided.  One piece represents application modernization to accommodate web-portal access to core applications, mobile worker support, and so forth.  The other represents productivity enhancement applications not practical or even possible pre-cloud.  We’re currently in the early stages of realizing that first piece, which represents an increase in IT spending of around 25% from current levels.

The second piece has not yet seen any statistically significant realization, and only a very small fraction of either buyers or vendors even understand it.  Nevertheless, it’s this piece that represents the real plum, an incremental 70% in IT spending growth potential.  Most of this revolves around the exploitation of context, which I’ve blogged quite a bit about.
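To make the arithmetic concrete, here’s a minimal sketch (Python, with today’s spending indexed to 100) of the components just described.  The percentages are the ones quoted above; the dollar base and the calculation are purely illustrative, not the model itself.

```python
# Illustrative-only sketch of the spending components described above.
# The percentages are the ones quoted in this post; the base of 100 is arbitrary.

current_spend = 100.0  # index today's enterprise IT spending to 100

# Component 1: core business applications that stay in the data center
core = 0.75 * current_spend

# Component 2: non-core applications that could move to the cloud on price
movable = 0.25 * current_spend

# Under a pure "move to the cloud" paradigm, cloud spending tops out here
cloud_ceiling = movable
print(f"Cloud ceiling under pure migration: {cloud_ceiling:.0f} "
      f"({cloud_ceiling / current_spend:.0%} of today's spend)")

# Component 3a: web/mobile modernization of core applications (~25% incremental)
appmod_increment = 0.25 * current_spend

# Component 3b: productivity-enhancement applications (~70% incremental potential)
productivity_increment = 0.70 * current_spend

print(f"Spend with the modernization piece:   {current_spend + appmod_increment:.0f}")
print(f"Potential with the productivity piece: {current_spend + productivity_increment:.0f} "
      f"(~{(current_spend + productivity_increment) / current_spend:.1f}x today's level)")
```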

These components of cloud impact aren’t the decisive drivers, at least at present, for spending trends.  Enterprise IT spending has always had two elements, one being the orderly modernization of existing resources, and the other being the resources needed for new projects that delivered incremental business benefits to justify incremental costs.  Budget and project money, in short.  The latter grows and shrinks in value cyclically, depending on the introduction of new IT paradigms that can fuel investment.  We’ve not had such a cyclical augmentation since the late ‘90s, by far the longest delay in IT history.

Put in this light, the cloud’s second, productivity-based piece would ignite another of those cycles.  Absent one, what we’re seeing is less a “shift to the cloud” than a complicated cost-based approach to the first piece of cloud impact, application modernization.

When you do mobile/web enhancements to core applications, you introduce the need for a “front-end” piece that should be more resilient and elastic-with-load than typical core applications.  To host this stuff in the data center would involve creating a corresponding elastic resource pool.  Core applications don’t generally need that level of elasticity, so the size of this pool would be small (limited to the front-end elements) and it would be less resource-efficient than the public cloud.  As a result, a fair percentage (a bit less than two-thirds, says the model) of these front-end elements are going to the cloud.
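As a rough illustration of why that front-end piece tends to land in the cloud, here’s a toy cost comparison.  Every number in it is a hypothetical assumption, not an output of my model: a private pool has to be sized for peak load, while the public cloud bills, roughly, for what’s actually consumed.

```python
# Toy comparison of hosting an elastic front-end in a small private resource pool
# versus the public cloud. All numbers are hypothetical assumptions for illustration.

PEAK_UNITS = 50          # capacity needed at peak load
AVG_UNITS = 12           # average load actually consumed
DC_UNIT_COST = 1.0       # cost per capacity-unit per month in a small private pool
CLOUD_UNIT_COST = 1.6    # cost per consumed-unit per month in the public cloud

# A private pool must be provisioned for peak, so idle capacity is paid for anyway.
private_pool_cost = PEAK_UNITS * DC_UNIT_COST

# The public cloud bills (roughly) for what is consumed, despite a higher unit price.
cloud_cost = AVG_UNITS * CLOUD_UNIT_COST

print(f"Private elastic pool: {private_pool_cost:.0f}")
print(f"Public cloud:         {cloud_cost:.0f}")
print("Cloud wins" if cloud_cost < private_pool_cost else "Data center wins")
```

With a high peak-to-average ratio like this one, the cloud’s higher unit price is still the cheaper answer; flip the ratio and the data center wins, which is why the model puts a bit less than two-thirds of these front-ends, not all of them, in the cloud.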

What this has done, and will continue to do, is to cap the levels of enterprise IT spending on data center resources, including servers, software, and networking.  When vendors say that the enterprise is cautious in spending, what they’re saying is that the productivity-project piece hasn’t developed, and the tactical front-end piece is being siphoned off by the cloud.  The net is little movement, so little or no growth in the budget, and more price pressure on vendors.  This is likely to prevail, says my model, through 2021.

The first force acting to change this stasis is the need to operationalize applications that have both a front-end cloud piece and a back-end data center piece.  This is what “hybrid cloud” is really about these days, and the need for it was first recognized only last year.  Hybrid cloud has shaped the offerings of the cloud providers and spawned data-center-centric visions from players like IBM/Red Hat and VMware.  To get full operations compatibility in the hybrid cloud, you need to adopt a container model, Kubernetes deployment, and unified monitoring and management.  This positive force on data center IT spending is already operating, but since it’s not particularly hardware-centric, the impact isn’t easily seen.
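To give a sense of what unified monitoring and management across that hybrid boundary can look like under a Kubernetes model, here’s a small sketch using the official Kubernetes Python client to poll deployments in two clusters, one in the cloud and one in the data center.  The kubeconfig context names are assumptions, and a real operation would typically layer fleet-management or federation tooling on top of something like this.

```python
# Hypothetical sketch: one script watching deployments in both halves of a hybrid
# cloud through the Kubernetes API. The kubeconfig context names are assumptions.
from kubernetes import client, config

CONTEXTS = ["public-cloud-cluster", "data-center-cluster"]  # hypothetical names

for ctx in CONTEXTS:
    config.load_kube_config(context=ctx)   # point the client at one cluster
    apps = client.AppsV1Api()
    deployments = apps.list_deployment_for_all_namespaces()
    print(f"--- {ctx} ---")
    for d in deployments.items:
        ready = d.status.ready_replicas or 0
        wanted = d.spec.replicas or 0
        print(f"{d.metadata.namespace}/{d.metadata.name}: {ready}/{wanted} replicas ready")
```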

The second force operating on IT budgets is the gradual retro-modernization of application front-ends to make them cloud-resident.  My model says that about 40% of web/mobile modernization projects have already resulted in increased data center spending, because they were realized in-house and not in the cloud.  In almost all cases, these early appmod players regret not having taken a cloud focus, and they’re trying to work a change through the budgeting process.  This is already having a somewhat suppressive effect on data center spending, because moving the front-end pieces into the cloud frees capacity in the data center, meaning less incremental gear is needed to compensate for demand growth.

The third force, the one that will be the first major spending policy shift, could be characterized as the tail/dog boundary function.  The cloud is elastic; the data center core applications that couple to it through transactions are far less so, if elastic at all.  As a result, pressure is created on the boundary point, the application components just inside the data center.  These, by this point, will be under common orchestration/operations control from the hybrid cloud operationalization step I noted above, but some level of elasticity will now be needed, and some data center changes will be required to support the greater elasticity of these boundary components.  By 2021 this force will be visible, and by 2022 my model says it will have added enough to IT spending to reverse the current stagnation, raising budgets a modest two to three percent.
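To picture the pressure at that boundary, here’s a toy simulation (all numbers invented, nothing from my model) of a bursty, elastic front-end feeding a fixed-capacity transactional back-end.  The backlog that builds up at the boundary is what forces some elasticity into the components just inside the data center.

```python
# Toy simulation of the cloud/data-center boundary: an elastic front-end pushes a
# bursty stream of transactions at a back-end with fixed capacity. The backlog that
# accumulates is the "pressure on the boundary" described above. Numbers are invented.
import random

random.seed(7)
BACKEND_CAPACITY = 100        # transactions the core system can absorb per interval
backlog = 0

for interval in range(12):
    # Elastic front-end traffic: a base load plus occasional bursts
    arrivals = 80 + (random.randint(60, 120) if random.random() < 0.3 else 0)
    backlog = max(0, backlog + arrivals - BACKEND_CAPACITY)
    print(f"interval {interval:2d}: arrivals={arrivals:3d}, backlog at boundary={backlog:3d}")
```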

This is the time when it would be helpful to have that productivity-project benefit piece laid on the table.  The technical architecture to support it should be the same as would be needed to add elasticity to the cloud/data-center boundary, and it’s within the scope of current “Kubernetes ecosystem” trends to create it.

The Kubernetes ecosystem is a necessary condition for our productivity-project benefits to boost IT into another of those cyclical growth periods that filled five decades before petering out in 1999.  It’s not a sufficient condition, because we still need to create that “virtual world” I’ve blogged about.  Further productivity enhancements can only come by continuing the trend that all the prior IT waves were based upon, which is getting IT empowerment closer to the work.  Robotics, AI, augmented reality, edge computing, IoT, and 5G all depend on this single principle.  To have computers and information technology help workers, they have to share a world.  To make that work, we have to create a bridge between the real world we inhabit and the information world we live in parallel with.

This is the bridge to greater enterprise IT spending, that potential for 70% spending growth.  On the current side of the bridge is the old model of computing, which centralizes information collection and processing, then distributes it as a set of information elements designed to help workers.  On the future side is a newly created game-like artificial or augmented reality, one that frames work in a way that unites everything from the real world and from the old IT world.  This is what my modeling says would be the real future of both the cloud and enterprise IT, and of course all the vendors and users in the space as well.

This future is also a bit like the old networking “god-box”, the device that did everything inside a single hardware skin.  People quickly realized that 1) the god-box was going to be way too complicated and costly unless the future was totally chaotic, and 2) since consensus on handling chaos is difficult to achieve, the real market was likely to sort out smaller options, making the box less “god-like” in mission.  What we’re seeing in the enterprise IT space is something like this sorting out, because vendors can’t sell and enterprises can’t buy something that revolutionary.

Well, they had some issues with buying past revolutions in IT as well.  On average, the industry could only sustain previous revolutions at a rate of one every fifteen or twenty years.  It’s been 20 years since the last one, so in one sense, we’re due.  The challenge is that this virtual-world revolution is a lot more complex.  It’s an ecosystem that envelops everything, or it falls short of optimality.  Modeling, sadly, doesn’t tell me how we could get through it.

Prior IT revolutions happened because a single strategic vendor created a vision, and was willing to take a risk to advance it.  I wonder what vendor might do that today?  I’d sure like to see someone take it up, because an industry generating 1.7 times the current level of revenue in information technology would likely be a gravy train for us all.