The Technical Issue of the New Decade

Having commented negatively on research reports in my last blog, I want to try to overcome their specific issues and summarize, based on real data, the likely market trends for 2020.  Wall Street and, to a lesser extent and in a less timely way, government research provide some numbers I can push through my model to see if anything reasonable emerges.  Here are the results.

We are clearly in a broad deceleration in both IT and network spending at the overall market level.  You can see this in company reports and in Wall Street financial research that tracks spending and trends based on those reports.  There seem to be two factors involved, one that's been going on for some time and one that's fairly recent, emerging in late 2018 and running into 2019 and beyond.

The long-standing issue is one I've noted before.  For almost a decade, both enterprises and network operators have been under pressure on return on infrastructure spending.  The operator issue is manifest in the "profit per bit" squeeze; for the enterprise, the problem is a lack of direct connection between productivity enhancement and IT spending.  I'll focus on the latter here, since I've blogged recently on the operator profit-per-bit issue.

An analysis of government data shows that IT spending growth relative to GDP growth follows a cyclical (sine wave) pattern.  When a new IT paradigm that taps into new benefits emerges, spending growth rises until those benefits are realized, then declines as budgeting becomes conservative and “refresh” oriented.  There have been three distinct cycles since the computer age began, and we fell from the peak of the last cycle right around the millennium.  Since then, while we’ve had significant technical advances in both hardware and software, those advances have yet to create that direct-to-benefits connection.

Without an incremental source of productivity benefits, buyers focus on enhancing the capacity of their current IT platform at a lower cost.  Obviously, lower cost means (at a minimum) slightly reducing capital budgets, in terms of dollars per unit of computing power.  What we're seeing recently is a further push for spending reduction, to the point where about a third of the CIOs I recently talked with say they are actually projecting zero or negative IT spending growth in 2020.

The recent driver of spending change has come as a result of the cloud.  It’s not as simple as saying that “businesses are moving IT to the cloud”, because anyone who works for an enterprise IT organization knows that’s a vast oversimplification.  What’s really happening is the result of a complex dynamic of economy of scale and planning inertia.

Real-time applications, things like order entry and online banking, tend to have two components.  One is the "back-end" piece that actually updates databases and creates ongoing reports and analysis for management.  This part, the company's "core business applications", is highly sensitive and is very rarely even thought of as a cloud application.  In fact, my analysis of some user data shows that it would generally be more costly to run these in the cloud because of the way cloud traffic and data access are priced.  The other component is the "front-end" that presents the user interface.

From a user's perspective, there's a lot of think time in a transaction, and often a lot of interaction with software simply to set up for what's being done.  This front-end activity can make up, according to CIO data I've gathered, about a third of the total processing time required, and it scales directly with the pace of user activity.  Transaction processing, in contrast, makes up only about a quarter of back-end activity, the rest being reporting and analytics.

Enterprises figured out quickly that the dynamic of front-end processing fit the cloud model quite well, and as a result, the majority of things "moving to the cloud" have not been whole applications, but rather the front-end, visible part of applications.  You can see this in the shift in emphasis in enterprise programming languages, too: we've gone from a time when C and C++ dominated to one dominated by JavaScript, Ajax, and Python.
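To make the division of labor concrete, here's a minimal Python sketch of the pattern.  The endpoint URL, field names, and function names are all hypothetical, purely for illustration: the cloud-hosted front-end absorbs the interactive "think time" work, and only the finished transaction crosses into the data center.

```python
# A minimal sketch of the front-end/back-end split described above.
# The URL and field names are hypothetical, for illustration only.
import json
import urllib.request

BACKEND_URL = "https://backend.example.com/orders"  # hypothetical back-end transaction API

def handle_order_form(session_state: dict, user_input: dict) -> dict:
    """Cloud-hosted front-end logic: absorb the interactive 'think time'
    (validation, defaults, session assembly) without touching the back end."""
    if not user_input.get("item_id"):
        return {"status": "error", "message": "item_id is required"}
    session_state.setdefault("cart", []).append(user_input)
    return {"status": "ok", "cart_size": len(session_state["cart"])}

def submit_order(session_state: dict) -> dict:
    """Only the final, validated transaction is sent to the data center."""
    payload = json.dumps({"items": session_state.get("cart", [])}).encode()
    req = urllib.request.Request(BACKEND_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Everything in handle_order_form scales with user activity and fits the cloud's elasticity; submit_order is the only point of contact with the core business applications.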

As you move front-end work to the cloud, you create an uptick in spending on cloud data center infrastructure, the so-called "hyperconverged infrastructure" or HCI.  You also begin to build headroom in data center capacity, because you've offloaded what constituted a third of overall processing.  That reduces incremental infrastructure need, and it's what has dominated the general negative trend in enterprise IT spending.  It will continue to be a factor in 2020 and beyond, at least through 2022 according to my model.

This is where economy of scale comes in.  The cloud, as a pool of resources applied across many companies and industries, is more efficient at handling variability in workloads.  Thus, if a hundred units of processing power are shifted to the cloud, they'll consume on average (according to my model) about 83 units of cloud-power.  If the cloud weren't, in its steady state, generating a significantly lower unit cost of processing than the data center, it wouldn't be worthwhile to move there at all.  And if it is more efficient, it follows that cloud infrastructure spending will run below the level of the data center spending it offsets.
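A quick back-of-the-envelope calculation shows why the math works against total spending.  The 0.83 factor is my model's figure from above; the relative unit costs are assumptions chosen only to illustrate the logic.

```python
# Back-of-the-envelope version of the economy-of-scale argument above.
# The 0.83 factor comes from the model; the unit costs are hypothetical.
DATACENTER_UNITS = 100       # processing units moved out of the data center
CLOUD_EFFICIENCY = 0.83      # cloud units needed per data-center unit (per the model)
DC_COST_PER_UNIT = 1.00      # normalized data-center cost per unit
CLOUD_COST_PER_UNIT = 0.90   # hypothetical cloud cost per unit

dc_spend = DATACENTER_UNITS * DC_COST_PER_UNIT
cloud_spend = DATACENTER_UNITS * CLOUD_EFFICIENCY * CLOUD_COST_PER_UNIT
print(f"data-center spend displaced: {dc_spend:.1f}")   # 100.0
print(f"cloud spend created:         {cloud_spend:.1f}")  # 74.7, well below what it offsets
```

Under these assumptions, every hundred units of displaced data center spending creates only about 75 units of cloud spending, which is exactly the kind of net deflation the financial reports show.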

The complicating factors in all of this are 1) that the early front-end targets are applications not yet modified for mobile/web front-ends, and 2) that data center modernization and refresh still operate on data center infrastructure.  Almost all the early front-end "migration" to the cloud wasn't migration so much as redevelopment.  In 2020, my model says that new front-end cloud development will come half from new projects and half from actual transmigration.  Thereafter, of course, more and more will come from actually moving something out of the data center.  That doesn't totally trash data center spending, but it does reduce growth, which is what we saw in 2019 and will see in 2020.

On the networking side, the largest contribution to business spending on network infrastructure is the data center (switching), followed by branch connectivity devices (CPE).  Since VPN and SD-WAN services have almost entirely displaced self-built router-and-trunk networks, everything focuses, in infrastructure terms, on the VPN edge.  CPE is relatively immune to major shifts in requirements, so the largest incremental source of spending is security, which is exactly what we've been seeing.  Even that eventually reaches maturity, though, likely again in 2022.

The overall impact of this is a broad shift from IT and networking dominated by hardware to a software-centric planning vision.  The major evolution CIOs face isn't one of "virtual machines" or "containers" in the cloud; it's one of developing applications to optimize an agile, container-centric model of deployment that is both cloud-friendly and more capital- and operations-efficient in the data center.

Agile applications need to be developed as agile applications, which is a larger shift than simply deploying in the cloud rather than the data center.  There are plenty of tips and techniques for this, including the microservice-and-mesh model, but they're really applicable primarily to front-end components.  That means efficient front-end cloud development is really a rewrite, which hampers transmigration.  To ease things, it would help to have a development model suitable for both cloud and data center.  I think that would mean abstracting the notion of a "component" of an application so it could be bound either as a co-loaded, co-resident component in the data center or as a microservice in a mesh in the cloud (and, of course, things in between).  We don't have that model yet.
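To illustrate what that abstraction might look like, here's a minimal Python sketch; all of the names are hypothetical.  The point is that the calling logic shouldn't change when a component's binding moves between in-process and over-the-network.

```python
# A minimal sketch of the 'abstract component' idea: the same application
# logic can be bound in-process (data center) or behind a network call
# (cloud microservice). All names here are hypothetical.
import json
import urllib.request
from abc import ABC, abstractmethod

class Component(ABC):
    """An application component, independent of how it's deployed."""
    @abstractmethod
    def invoke(self, request: dict) -> dict: ...

class InProcessComponent(Component):
    """Co-loaded, co-resident binding: a direct function call, no network latency."""
    def __init__(self, func):
        self.func = func
    def invoke(self, request: dict) -> dict:
        return self.func(request)

class RemoteComponent(Component):
    """Microservice binding: the same contract, reached over HTTP."""
    def __init__(self, url: str):
        self.url = url
    def invoke(self, request: dict) -> dict:
        req = urllib.request.Request(self.url, data=json.dumps(request).encode(),
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())

# The calling code doesn't change when the deployment does:
def price_order(request: dict) -> dict:
    return {"total": sum(item["price"] for item in request["items"])}

pricing: Component = InProcessComponent(price_order)          # data-center binding
# pricing = RemoteComponent("https://pricing.example.com/")   # cloud binding
print(pricing.invoke({"items": [{"price": 5.0}, {"price": 7.5}]}))
```

The design choice here is that deployment topology becomes a binding decision made at packaging time, not a structural decision baked into the code, which is what would let a single development model serve both worlds.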

The same thing is true on the deployment side.  A service mesh of microservices is a truly awful way of doing something like crunching through a mass database; the network connection of components would introduce enormous accumulated delay.  Kubernetes has means of both targeting specific hosts (nodes) with containers (pods) and avoiding them, and those means should be used to create efficient co-residency where latency between components is a critical issue, and to provide distributability for load balancing and resilience where that's more valuable than low latency.
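Kubernetes' pod affinity and anti-affinity rules are the existing building blocks closest to this.  Here's a sketch using the official kubernetes Python client; the application labels are hypothetical.  The first rule pins a latency-sensitive component onto the same node as its partner; the second spreads replicas of a tier across nodes for resilience.

```python
# A sketch of the placement controls Kubernetes exposes, via the official
# 'kubernetes' Python client. Labels like 'order-backend' and 'web-tier'
# are hypothetical. Pod affinity pins latency-sensitive components onto
# the same node; anti-affinity spreads replicas for resilience.
from kubernetes import client

# Co-residency: require scheduling on the same node as the back-end pods.
colocate = client.V1Affinity(
    pod_affinity=client.V1PodAffinity(
        required_during_scheduling_ignored_during_execution=[
            client.V1PodAffinityTerm(
                label_selector=client.V1LabelSelector(
                    match_labels={"app": "order-backend"}),
                topology_key="kubernetes.io/hostname",  # "same node"
            )
        ]
    )
)

# Distribution: prefer spreading web-tier replicas across different nodes.
spread = client.V1Affinity(
    pod_anti_affinity=client.V1PodAntiAffinity(
        preferred_during_scheduling_ignored_during_execution=[
            client.V1WeightedPodAffinityTerm(
                weight=100,
                pod_affinity_term=client.V1PodAffinityTerm(
                    label_selector=client.V1LabelSelector(
                        match_labels={"app": "web-tier"}),
                    topology_key="kubernetes.io/hostname",
                ),
            )
        ]
    )
)
```

What's still missing is a model that chooses between these two policies based on declared latency requirements rather than hand-written rules, which is the gap I'm pointing to.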

Hardware vendors, particularly server vendors, have been slow to address these data center and application evolution trends, which leaves the field to the software giants, notably Red Hat and VMware.  It's these two companies that, in my view, will frame the future of IT, because the kinds of tools they provide, and the pace at which they provide them, will set the timeline for IT evolution.

It's also likely that these software giants will determine whether there is indeed another productivity wave, something to drive the fourth of the IT spending cycles that in the past created industry momentum.  Hardware is passive from an application perspective; you just run stuff on it.  Even platform software is partially passive; as long as it frames our new application development and deployment model, it's done its job.  The thing that's never passive is application software, and the vendors who define the model of application-building are in the best position to develop, or partner to encourage, those new-cycle applications.

It's hard to identify (reliably, at least) a deep driver common to all three of the past IT spending cycles, but in the roughly ten years since I first made that presentation to Merrill Lynch financial experts who'd been called in from all over the world to hear it, I've come up with a notion.  The simple truth is that the closer IT and its information resources move to the work being done, the better the return.  A better return, a better ROI, means more investment.

The trends that could move IT closer to work and workers are mobile empowerment and IoT.  We are, as I've noted above, in the process of using the cloud to better couple mobile and web devices to the data center.  While this helps in many applications (sales and support, notably), it still doesn't address the fundamental work-to-IT relationship.  That requires point-of-activity empowerment, a new model for collaboration, and the coupling of "field sensor" information so an application can "see" the work environment as well as the worker, and decide what would be most helpful.

None of this is rocket science, folks.  The problem is, I think, that we as an industry have gotten focused on easy money.  It's almost impossible to get venture support for an "infrastructure" startup, and if a company rode a popular hot-button technology like AI to funding success, it would then be pressured to forget this long-term productivity and point-of-activity crap and focus on something that could be linked to social media.

This is what one of those insightful software giants could fix.  Big, established companies can take a longer view.  IBM was once the master of this, in fact, and Red Hat is one of those insightful software giants (VMware, you'll recall, is my other candidate).  Even Microsoft might take up the banner here, but until some firm moves the ball on productivity support from IT spending, we're going to be in a tech market that's in consolidation mode.  The alternative would be more fun, so why not go for it?