What’s the thing (well, one of the things) I’m sick of hearing? It is that, as Michael Dell and others have said recently, “It’s definitely a multi-cloud world.” Why am I sick of it? Two reasons. First, because it’s never been anything else, and the fact that this is only now being recognized is a sad commentary on the industry. Second, because the statement has no value to planners, only to publications that want to sell ads based on story clicks. We’ve missed the technical point by focusing on glitz…as usual.
It’s always nice to put things into a financial perspective. Global total IT spending is about a trillion dollars a year. I ran the cloud through my modeling process five years ago, and it churned out a prediction that the migration of current applications to the cloud could never exceed 23% of that total. Today, total cloud spending is about a tenth of that 23%, which works out to roughly 2% of total IT spending, and more than half of it isn’t business applications at all, but web companies serving the consumer market.
What this says is that unless you believe that enterprises have scrapped over 95% of their current IT and gone back to high stools and green eyeshades, the cloud has not displaced the data center. Nor will it, ever. Given that, the data center will always be a “private cloud” supplemented by a “public cloud”, which makes the result both a hybrid cloud and a multi-cloud.
The application view of the same picture creates a similar result. What are the top reasons for enterprise use of the cloud? Resiliency and scalability. If I want to use the cloud as a backup resource, to replace something that’s broken or to scale something that’s overloaded, where does the original application live? In the data center, by long odds. Thus, users expect the public cloud to look like an extension of the data center, which is a multi-cloud environment.
Even if you want to say that “multi-cloud” means multiple public cloud providers, that kind of vision is the explicit goal of almost three-quarters of all enterprises I’ve talked with. Most feel that way because they don’t want to be “locked in” to a single provider, but the second-place answer is that they expect the “optimum” provider for different geographies or different applications to be…well…different.
These are all “why?” reasons to say that multi-cloud is the de facto approach. There’s also a “why not?” reason, meaning that there’s a set of technology requirements and trends that would tend to erase any distinction among multiple clouds, which raises the question of why you wouldn’t want to adopt that model. We met one of those requirements already: users want to be able to move applications and their components freely to wherever they need to be hosted. There are more, and in particular one giant one.
The largest use of public cloud services by enterprises today is as a front-end for business applications. The public cloud hosts the web and mobile elements of applications, and it can spin up another instance to replace or supplement what’s there. Public cloud providers know this and have offered a lot of support for these applications in the form of web services. They are now offering a set of web services aimed at what’s being called “serverless computing”. The right kind of component (a functional process or “lambda” or a microservice, depending on the cloud provider) can be run on demand anywhere, with no reserved resources at all. Wouldn’t “anywhere” logically mean “in any cloud or in the data center”? You can’t believe in serverless without believing in multi-cloud.
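To make that concrete, here’s a minimal sketch (in Python, with invented names and event shapes, not any provider’s actual API) of what a portable serverless component looks like: the business logic stays provider-neutral, and thin adapters match each cloud’s handler convention.

```python
import json

def check_inventory(sku: str) -> dict:
    """Provider-neutral business logic: stateless, so it can run anywhere."""
    # Hypothetical lookup; a real function would call a backing service.
    return {"sku": sku, "in_stock": True}

# AWS Lambda-style adapter: handlers take (event, context).
def aws_handler(event, context):
    return {"statusCode": 200,
            "body": json.dumps(check_inventory(event["sku"]))}

# Google Cloud Functions-style adapter: HTTP handlers receive a request object.
def gcf_handler(request):
    return json.dumps(check_inventory(request.args["sku"]))
```

The point isn’t the adapters themselves; it’s that nothing in the component binds it to a particular cloud, so “anywhere” really can mean anywhere.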
OK, hopefully this all demonstrates that anyone who looked at the cloud seriously and logically would have concluded from the first that multi-cloud was where things had to go. What about my second reason?
If you dip into multi-cloud drivers and requirements, what you see is a vision of the cloud as a kind of seamless compute fabric. You want to run something? You make some policy decisions covering QoE, price, security, and so forth, and you deploy. Each option can’t have its own unique deployment and lifecycle requirements, because the differences would make the picture impossible to operationalize. What do you do, then? Answer: You rely on the principles of abstraction and virtualization.
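Here’s a trivial sketch of what “make policy decisions, then deploy” could look like; the policy names, weights, and candidate hosting points are all invented for illustration.

```python
# Candidate hosting points, each scored (0-1) against some policies.
CANDIDATES = [
    {"name": "private-dc", "qoe": 0.9, "price": 0.5, "security": 0.95},
    {"name": "cloud-a",    "qoe": 0.8, "price": 0.8, "security": 0.80},
    {"name": "cloud-b",    "qoe": 0.7, "price": 0.9, "security": 0.75},
]

# How much this particular deployment cares about each policy.
WEIGHTS = {"qoe": 0.4, "price": 0.2, "security": 0.4}

def best_host(candidates, weights):
    """Pick the hosting point with the highest weighted policy score."""
    return max(candidates,
               key=lambda c: sum(w * c[k] for k, w in weights.items()))

print(best_host(CANDIDATES, WEIGHTS)["name"])  # "private-dc" with these weights
```

The decision logic is the easy part; the hard part is that “deploy” has to mean the same thing for every candidate, which is where abstraction comes in.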
In IaaS services, a “host” is a virtual machine. The services from different public cloud providers, or the different cloud stacks used for the private cloud, differ from each other in their management, but they’re all supposed to run things the same way. That property should be even more apparent in serverless computing. In effect, what cloud users want is a kind of “virtual cloud” layer that sits above the providers, describing component hosting and connectivity in a uniform, universal way. This is what we should have realized we needed from the first, and we might have realized it had everyone recognized that multi-cloud was where we’d end up (which they should have).
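A minimal sketch of that “virtual cloud” layer, again in Python with hypothetical class and method names (this isn’t an existing library): applications deploy against one uniform interface, and per-provider drivers hide the differences.

```python
from abc import ABC, abstractmethod

class CloudHost(ABC):
    """Uniform abstraction over any hosting point, public cloud or data center."""

    @abstractmethod
    def deploy(self, image: str, connections: list[str]) -> str:
        """Deploy a component image, wire up its connections, return an ID."""

class PublicCloudDriver(CloudHost):
    def deploy(self, image, connections):
        # Would translate into one public provider's management APIs.
        return f"public:{image}"

class PrivateCloudDriver(CloudHost):
    def deploy(self, image, connections):
        # Would translate into the data-center stack's APIs instead.
        return f"private:{image}"

def deploy_anywhere(host: CloudHost, image: str) -> str:
    # The application never sees which cloud it landed in.
    return host.deploy(image, connections=["frontend-net"])
```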
We also need to be thinking about how “serverless” computing is represented at the functional level, and about how the various cloud provider web services are represented too. If you want something to be portable, you’d also like it to be able to take advantage of service features where they’re available, or to limit its hosting options to where you can get them. That suggests a middleware-like tool, integrated with the virtualization layer, that lets developers build code that dynamically adapts to different cloud frameworks. If we had all of that, then multi-cloud would be a giggle, as they say.
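In the same hypothetical spirit, here’s what that middleware might look like at its simplest: the code asks the virtualization layer what services a hosting point offers and adapts, rather than hard-wiring one provider’s features. The capability names are invented.

```python
# What each hosting point advertises (supplied by the virtualization layer).
FEATURES = {
    "cloud-a":    {"managed-queue", "object-store"},
    "private-dc": {"object-store"},
}

def supports(host: str, feature: str) -> bool:
    return feature in FEATURES.get(host, set())

def enqueue(host: str, message: str) -> None:
    """Use a native managed queue where the host has one, else a portable fallback."""
    if supports(host, "managed-queue"):
        print(f"[{host}] enqueued via the provider's managed queue: {message}")
    else:
        print(f"[{host}] enqueued via a portable, self-hosted queue: {message}")

enqueue("cloud-a", "order-123")     # exploits the native feature
enqueue("private-dc", "order-123")  # same code, graceful fallback
```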
The frustrating thing about this one-two combination of insightless cloud promotion is how much it’s probably cost us. We still don’t have a realistic picture of what a true multi-cloud architecture would look like. We don’t have a software development framework that lets enterprises, or the software houses serving them, build the optimum software. Who was the innovator that launched functional/lambda/microservice serverless computing? Twitter. Even today, more than two years after a Twitter blog described their model, most enterprises don’t know about it, what it could mean, or how they should plan to use it.
This has infected areas beyond the enterprise. NFV kicked off in a real sense in 2013, so the Twitter blog came along a couple of years after the start. Have we fit that model, which supports what’s approaching ten billion sessions per day, into NFV to make it scalable? Nope. Probably most people involved, in both the vendor and operator communities, don’t even know the concept exists.
Nor do they know that Amazon, Google, IBM, and Microsoft now all have serverless options based on that original Twitter concept. The efforts by network operators and network vendors to push networking into the cloud era are falling further behind the players who have defined that era. This may be the last point in market evolution where network operators can avoid total, final, irreversible disintermediation. NFV will not help them now. They have to look to Twitter’s and Google’s model instead.