Looking at Enterprise IT in 2019

Enterprises have their own plans for IT and networking for 2019, and since I’ve done a blog on the service provider side, I also want to do one for enterprises.  My own enterprise survey process isn’t as comprehensive as for the network operators, so I’m also commenting on predictions made by the financial industry to provide broader input to this blog.

Among the enterprises I’ve interacted with in the last quarter, the most interesting point is that there’s a considerable amount of agreement across all the verticals.  Enterprises are looking to make more and better use of the cloud in 2019, and in fact that’s their highest IT priority.  A close second, and related to the first, is the goal of improving application lifecycle automation, meaning the automation of the deployment, redeployment, and scaling of applications.  Third is the desire to improve hosting efficiency within the data center, and fourth is to incorporate AI or machine learning (enterprises are fuzzy on what the distinction is, or even whether there is one) in applications and operations.

At the core of this combination is the fact that enterprises are only now understanding the real mission of the cloud.  It’s not to replace traditional data centers in hosting mission-critical apps, or in fact to replace data centers for hosting many non-mission-critical apps.  It’s for hosting application components that are more tied to data presentation to users/workers than to transaction processing.  Presentation is a piece of the overall application, and so the cloud is a piece of the overall hosting strategy—what we call “hybrid cloud” is the order of the day.
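
To make that division of labor concrete, here’s a minimal sketch (in Python) of a cloud-hosted presentation component that shapes user input and hands the actual transaction off to a system-of-record API back in the data center.  The URL and the order fields are hypothetical, purely for illustration.

```python
# A sketch of the presentation/transaction split: the cloud tier shapes
# and validates user input; the transaction itself is executed by an API
# in the data center. DATACENTER_API and the order schema are hypothetical.
import json
from urllib import request

DATACENTER_API = "https://dc.example.internal/orders"  # hypothetical endpoint

def submit_order(order: dict) -> dict:
    """Cloud tier: validate and shape here, transact back in the data center."""
    if "item" not in order:
        raise ValueError("order must name an item")
    req = request.Request(
        DATACENTER_API,
        data=json.dumps(order).encode(),
        headers={"Content-Type": "application/json"},
    )
    # The system of record never moves; the cloud just fronts it.
    with request.urlopen(req) as resp:
        return json.load(resp)
```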

Wall Street research picks up much of this, but in a more technology-specific form.  They see container technology and data center efficiency (hyperscale, networking) as key elements, for example.  Overall, though, Street research validates the notion that hybrid cloud is the model, to the point where it eclipses public cloud in most analysts’ view.  The hybrid shift favors Microsoft (who has always had a better private cloud story).  They also see a shift in application development practices and data center hosting models to support hybridization, and a shift in operations behavior to support efficient management of hybrid hosting.

The hybrid cloud mission is thus perhaps the most significant single driver/factor in IT spending in 2019.  It’s going to reshape how applications are developed, encouraging containerization, stateless behavior to enable easy scalability, and an expanded view of orchestration.  In the networking area, it’s the primary driver of change for data center switching, the primary motivator for virtual networking in the data center, and a major driver for SD-WAN and SD-WAN feature evolution.
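
As an illustration of the stateless-behavior point, here’s a minimal sketch of a component that keeps no session state of its own: everything arrives with the request, and anything durable is pushed to an external store (stubbed out here), so identical replicas can be scaled freely behind a load balancer.  The endpoint and the store are hypothetical.

```python
# A minimal sketch of the "stateless component" pattern. No instance or
# process state survives between requests, so any replica can serve any
# request. The external store is a stand-in for a real database or cache.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def external_store_put(key, value):
    """Stand-in for a shared store (database, Redis, etc.)."""
    print(f"persisted {key} -> {value}")

class StatelessHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Every request is self-describing; nothing is remembered locally.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Durable state goes to the external store, not this process.
        external_store_put(payload.get("id", "unknown"), payload)
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"status": "ok"}).encode())

if __name__ == "__main__":
    HTTPServer(("", 8080), StatelessHandler).serve_forever()
```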

Behind the hybrid cloud, in a technical sense, is a broader and at the same time more thoughtful application of virtualization.  The problem with specific hosting technology and specific application requirements is the specificity; it makes it difficult to create broad pools of resources and fit an increasingly complicated set of operations tools to the varied environment you’ve created.  Virtualization presumes that you’ll have abstractions of applications hosted on abstractions of resources, connected through abstract network services.  If all the tools operate on and through the abstractions, then a single straightforward toolkit fits practically everything, and works the same way for everything as well.
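
A minimal sketch of what that layering might look like: the deployment tool below is written only against abstract resource and network classes, so the same code drives a data center pool or a public cloud pool.  All the class and method names are illustrative, not drawn from any specific product.

```python
# Tools operate only on abstractions; concrete data center and cloud
# implementations plug in underneath without changing the tooling.
from abc import ABC, abstractmethod

class ResourcePool(ABC):
    @abstractmethod
    def allocate(self, cpu: int, memory_gb: int) -> str:
        """Return an opaque handle to a hosting slot."""

class NetworkService(ABC):
    @abstractmethod
    def connect(self, slot_a: str, slot_b: str) -> None:
        """Join two hosting slots to the same virtual network."""

class DataCenterPool(ResourcePool):
    def allocate(self, cpu, memory_gb):
        return f"dc-slot/{cpu}cpu-{memory_gb}gb"

class PublicCloudPool(ResourcePool):
    def allocate(self, cpu, memory_gb):
        return f"cloud-slot/{cpu}cpu-{memory_gb}gb"

class OverlayNetwork(NetworkService):
    def connect(self, slot_a, slot_b):
        print(f"overlay link: {slot_a} <-> {slot_b}")

def deploy_pair(front: ResourcePool, back: ResourcePool, net: NetworkService):
    """One deployment tool, written only against the abstractions."""
    ui = front.allocate(2, 4)    # presentation tier, often in the cloud
    txn = back.allocate(8, 32)   # transaction tier, often in the data center
    net.connect(ui, txn)
    return ui, txn

if __name__ == "__main__":
    deploy_pair(PublicCloudPool(), DataCenterPool(), OverlayNetwork())
```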

Enterprises are moving to this new virtualization approach, but my research suggests that they’re doing their moving largely by accident rather than through deliberate planning.  Most enterprises don’t really see a holistic goal or approach, in fact.  They’re addressing the issues of the hybrid cloud as they encounter them.  It’s fortunate for enterprises that the open-source movement has largely unified the technical goals and is developing toward them.  Otherwise we might well be creating a bunch of silos instead of breaking them down.

The open-source frameworks (particularly for containers and orchestration, meaning Kubernetes and Apache Mesos/Marathon) have also provided glue to hook in public cloud and even proprietary software tools that would otherwise have tended to be too tactical to be helpful in hybrid cloud.  Various SDN concepts, both open and proprietary (and including SD-WAN) are providing a strong network glue to bind distributed components.
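
To show that glue in action, here’s a minimal sketch using the official Kubernetes Python client: the same code lists workloads in an on-premises cluster and a public-cloud cluster simply by switching kubeconfig contexts.  The context names are hypothetical and would come from your own kubeconfig.

```python
# One client codebase, many clusters: the kubeconfig context decides
# whether we're talking to the data center or a public cloud.
# Requires: pip install kubernetes
from kubernetes import client, config

def pods_in_context(context_name: str):
    """List pods in whichever cluster the named context points at."""
    api_client = config.new_client_from_config(context=context_name)
    core = client.CoreV1Api(api_client=api_client)
    return [
        (p.metadata.namespace, p.metadata.name)
        for p in core.list_pod_for_all_namespaces().items
    ]

if __name__ == "__main__":
    for ctx in ("onprem", "public-cloud"):  # hypothetical context names
        print(ctx, pods_in_context(ctx))
```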

On the hardware side, most enterprises agree that it’s important to unify their hosting platforms, converging on a compatible set of server CPU options and also a fairly unified OS and middleware mixture.  Containers are helpful in that they frame a portable hosting slot model.  Network hardware doesn’t need unification as much as a common overlay model (SDN and SD-WAN, in the form of a highly flexible virtual network and network-as-a-service) and high-speed connectivity, both within each data center and among data centers.
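
The “portable hosting slot” idea is easy to see in code.  The sketch below, using the Docker SDK for Python and assuming a local Docker daemon, launches the same image with the same invocation whether the host is a laptop, a data center server, or a cloud VM; the image name is just an example.

```python
# The container is the hosting slot: identical image, identical launch
# call, regardless of what's underneath the runtime.
# Requires: pip install docker, plus a running Docker daemon.
import docker

def run_slot(image: str = "nginx:alpine"):
    client = docker.from_env()
    # Same invocation on any compatible host; only the daemon differs.
    container = client.containers.run(image, detach=True, ports={"80/tcp": 8080})
    print("started slot:", container.short_id)
    return container

if __name__ == "__main__":
    run_slot()
```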

The hardware side of things needs to be matched to the software a bit better.  The hybrid cloud solution to hardware resource mapping to a virtual abstraction is still evolving.  One thread is a “control-plane” or “cluster” extension that lets orchestration tools (Kubernetes) map data center and cloud resources as different clusters in the same pool.  A set of “data-plane” approaches seeks to unify the resource pool through a universal abstraction and network connectivity (Apache Mesos).  Data center vendors are not touting any particular affinity with either approach.
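
As a rough illustration of the cluster-extension approach, the sketch below aggregates node capacity across several Kubernetes clusters into one logical pool view, again via the official Python client.  The context names are hypothetical, and this only illustrates the pooling idea, not any real federation product.

```python
# Treat data center and cloud clusters as one logical resource pool by
# summing node capacity across kubeconfig contexts.
# Requires: pip install kubernetes
from kubernetes import client, config

def pool_capacity(contexts):
    """Sum CPU capacity across all nodes in all named clusters."""
    total_cpu = 0
    for ctx in contexts:
        api = client.CoreV1Api(config.new_client_from_config(context=ctx))
        for node in api.list_node().items:
            # Node CPU capacity is usually a plain integer string, e.g. "16".
            total_cpu += int(node.status.capacity["cpu"])
    return total_cpu

if __name__ == "__main__":
    print("pooled vCPUs:", pool_capacity(["dc-cluster", "cloud-cluster"]))
```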

That brings up the opportunities and risks.  Red Hat/IBM could be a major beneficiary of the hybrid cloud shift for a bunch of very obvious reasons.  Red Hat’s OpenShift is perhaps the best-known “productized” container application platform; it contains Kubernetes.  OpenShift could be a vehicle for IBM to take a really big position in the emerging hybrid cloud space.  VMware, perhaps a bit preoccupied by the drama of Dell’s possible reverse merger (now perhaps laid to rest), is advancing its basic network and hosting tools, but without the clear framework of OpenShift.  HPE similarly has all the right pieces but lacks ecosystemic marketing.

How about networking?  VMware is, for the network community, the most interesting of the possible hybrid cloud play-makers.  Their NSX virtual-network tool, augmented, though a bit feebly, by the Velocloud acquisition, could be seen as a universal virtual-network solution if it’s enhanced further.  Nokia/Nuage already has a comparable product, but they’re far less a player in the enterprise space than the service provider space.  Juniper, through its acquisition of HTBASE, might have enough critical virtual-network assets to play in hybrid cloud, but they seem stubbornly committed to “multi-cloud” when the term is almost always used to refer to users with more than one public cloud provider, not to those with a public-cloud-and-data-center strategy.

This is a good point to raise the question of open-source technology for enterprises.  My research says that enterprises strongly prefer open-source platform software to proprietary software.  However, most enterprises want to get that platform software from a source that bundles support.  They don’t insist on “purist” open-source at all, and in fact many don’t know anything about the topic or even the different classes of open-source licensing.  Open-source gets good ink, it’s “free” (except for the support that they’re expecting to pay for), and most of all it’s not vendor-specific.  In short, open-source is insurance against vendor lock-in.  That view is expanding among enterprises; the number who think open-source is the best protection against lock-in has doubled in just the last five years.

On the network side, it’s different.  Enterprises are not as committed to “open” network technology like switches or routers as the operators and cloud providers are.  The reason that’s important is not only that it means enterprise network equipment sales are less likely to be eroded by open platforms, but also that the virtual-network technology that’s essential to the cloud is not automatically expected to be open-source.

The software framework of 2019, then, is set by open-source.  Applications and some specialized tools can still be proprietary as long as they integrate with or operate on the open-source platform overall.  That means that, from the bottom up, we’re setting expectations for software and IT that aren’t going to provide vendors with automatic lock-in once the first deal is done.  That may bring about massive changes in buying practices in 2019, and it’s certain to impact 2020 and beyond.