In just the last week, we’ve had cloud-related announcements that seem to suggest a drive toward harmonizing cloud and data center around a single architecture. Amazon has an alliance with VMware, Microsoft is further improving compatibility and synergy between Azure and its data center elements, Google is expanding its Nutanix relationship for data center harmony, and Oracle is touting its Cloud at Customer offering. What’s up here?
First and foremost, virtually all cloud providers realize that simply moving existing applications to the cloud isn’t going to bring much cloud success. The future of the cloud is applications developed to exploit the cloud, meaning new development. Those applications, because they focus on cloud-specific benefits, usually employ cloud services hosted by the provider, beyond simple IaaS and maybe some basic database services. Thus, the public cloud has been gradually turning more “PaaS-like” in its software model.
The second issue is that exploiting the cloud doesn’t mean moving everything to it. There are a bunch of good reasons why companies will drag their feet on cloudifying many elements of current and future applications. The future cloud is a hybrid, in short. But if that’s true, then how do you deal with the cloud-hosted features you’ve come to rely on when the piece of the application you’re looking at has to run in your own data center?
Microsoft, whose Azure PaaS platform has always had a lot of affinity with its data center Windows Server products, has been quietly gaining traction as enterprises realize that in the cloud era, it’s really going to be about creating apps that are part-cloud and part-data-center. With the advent of IoT and the increased emphasis on event processing, a data center presence gave Microsoft a way of handling short-control-loop event applications at the edge, and Amazon was forced to counter with Greengrass, its offload-to-the-customer edge strategy. All the other stuff I cited above continues this trend.
For all the interest in this kind of hybridization, there’s no real consensus on just what it requires in terms of features, and even on whether you achieve cloud/data-center unity by pushing pieces of data center features into the cloud, pulling cloud features into the data center, or both. All of the current fusions of cloud and data center seem to be doing a little of both, preparing perhaps for the market to make its own requirements clear.
That may take a while. The enterprises I’ve talked with believe that applications for the future hybrid cloud are emerging, but there’s a bit of chicken-and-egg tension happening. It’s difficult for enterprises to commit to a strategy for which there’s no clear implementation consensus. It’s difficult for that consensus to arise without somebody committing to something, and in decent numbers. The vendors will probably have to take some initiative to drive things forward.
The Amazon/VMware deal is probably the one with the greatest potential to drive market change, given Amazon’s dominance in the public cloud. Unfortunately, we don’t have anything more than rumor on what the deal includes at this point. The story I’ve heard is that Amazon would provide a VMware-based hosting capability for many or all of the AWS web services it offers in the cloud. This would roughly mirror Microsoft’s Azure Stack notion.
Next on my list of influence drivers is the Google deal with Nutanix, largely because it embodies a function transfer from data center to cloud and not the other way around. Nutanix is best known as a VMware competitor in on-prem virtualization, and it has been the subject of a few spats with VMware over time. If Google wants to create a functional hybrid with feature migration, it needs a partner who is interested. Amazon’s dealings with VMware have already created a bridge from VMware into AWS, so it makes sense for Google to start by building a similar bridge of its own.
At the very least, all of this demonstrates that you can’t treat the “public cloud” as a polar opposite of the data center. At the most, it suggests that the cloud and the data center have to be in a tight enough partnership to require feature-shifting between the two. If that’s the case, then it impacts how we design applications and also how clouds and data centers interconnect at the network level. Either of these impacts would probably delay widespread adoption of a highly symbiotic cloud/data center application model.
That seems to be what Google, at least, expects. The first phase of their Nutanix deal, which lets apps migrate from the data center into Google’s cloud, isn’t supposed to be ready till next year. However, remember that Google has a lot more edge-like resources in their public cloud than anyone else, and they also have lower latency among the various hosting points in the Google cloud. Thus, they could satisfy edge-event-processing requirements more easily in their own cloud than most other cloud providers.
What about those changes to the application development process and to the network connectivity between cloud and data center? Let’s take those two issues in order.
The goal of “new” application designs should be to structure transaction flows so that critical data processing and storage steps fall toward the end of each flow, where they can be hosted in the data center. The front-end processes that either don’t need to access repository data at all, or can work with read-only copies, could then be cloud-hosted. It’s also possible that front-end processes could use summary databases, or even forego database access entirely. For example, it might be possible to “signal” that a given commodity is in inventory in sufficient quantity to presume that transactions to purchase it can go through. Should levels fall too low, the front end could be “signaled” that it must now do a repository dip to determine whether there’s stock, which might move that application component back along the workflow into the data center.
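To make that “signal” idea a bit more concrete, here is a minimal Python sketch of a cloud-hosted front end that works from a published stock signal and does the repository dip only when the signal says supplies are low. Everything here, from the class names to the threshold, is my own illustrative assumption, not anything drawn from a particular vendor’s platform.

```python
# Illustrative sketch only: a cloud front end that trusts an inventory "signal"
# and touches the authoritative data-center repository only on the exception path.

LOW_STOCK_THRESHOLD = 25  # assumed safety margin below which the signal flips


class FrontEndInventoryView:
    """Summary state the data center periodically pushes to the cloud front end."""

    def __init__(self):
        self._presumed_in_stock = {}  # sku -> bool

    def publish_signal(self, sku, on_hand):
        # The front end never sees the raw count, only a yes/no presumption.
        self._presumed_in_stock[sku] = on_hand > LOW_STOCK_THRESHOLD

    def presumed_available(self, sku):
        return self._presumed_in_stock.get(sku, False)


class Repository:
    """Stand-in for the authoritative inventory store in the data center."""

    def __init__(self, stock):
        self._stock = dict(stock)

    def reserve(self, sku, quantity):
        # The "repository dip": check and decrement the real count.
        if self._stock.get(sku, 0) >= quantity:
            self._stock[sku] -= quantity
            return True
        return False


def handle_purchase(view, repo, sku, quantity):
    if view.presumed_available(sku):
        # Plenty of stock presumed: the cloud front end accepts immediately and
        # lets the back end of the workflow commit the transaction later.
        return "accepted"
    # Low-stock signal: this step shifts back toward the data center for an
    # authoritative check before the order is confirmed.
    return "accepted" if repo.reserve(sku, quantity) else "backordered"


if __name__ == "__main__":
    repo = Repository({"widget": 12})
    view = FrontEndInventoryView()
    view.publish_signal("widget", 12)                  # below threshold, signal is False
    print(handle_purchase(view, repo, "widget", 3))    # forces a repository dip
```

The point isn’t the code; it’s that the front end only touches the authoritative repository on the exception path, which is exactly what makes it a good candidate for cloud hosting.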
On the network side, cloud computing today is most often connected as a remote application via the Internet. That isn’t going to cut it for highly interactive cloud components that sometimes have to live in the data center too. The obvious requirement is to shift the cloud’s relationship with the corporate VPN from remote access to full, efficient membership. In effect, a cloud would be treated as another data center, connected with “cloud DCI” facilities. Components of applications in the cloud would be added to the VPN ad hoc, or would be hosted on a private IP address space that’s then NATed to the VPN space.
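As a rough illustration of that last point, here is a small Python sketch of the addressing model, assuming a block of VPN addresses set aside for cloud-hosted components that are mapped in and out ad hoc. The class name, the address ranges, and the simple address pool are assumptions for illustration only; in practice this logic would live in a DCI gateway or SD-WAN layer, not in application code.

```python
# Illustrative sketch: cloud-hosted components keep a private address space that
# is mapped ("NATed") into a reserved block of the corporate VPN as they join.
import ipaddress


class CloudToVpnNat:
    """Maps cloud-private component addresses into a reserved VPN block ad hoc."""

    def __init__(self, vpn_block):
        self._pool = list(ipaddress.ip_network(vpn_block).hosts())
        self._mappings = {}  # cloud address -> VPN address

    def admit(self, cloud_address):
        # Called when an application component spins up in the cloud and needs
        # to appear as a first-class member of the VPN ("cloud DCI" attachment).
        addr = ipaddress.ip_address(cloud_address)
        if addr not in self._mappings:
            self._mappings[addr] = self._pool.pop(0)
        return self._mappings[addr]

    def withdraw(self, cloud_address):
        # Component terminates or migrates back into the data center.
        addr = ipaddress.ip_address(cloud_address)
        vpn_addr = self._mappings.pop(addr, None)
        if vpn_addr is not None:
            self._pool.append(vpn_addr)
        return vpn_addr


if __name__ == "__main__":
    nat = CloudToVpnNat("10.200.0.0/28")     # assumed VPN block reserved for cloud components
    vpn_ip = nat.admit("172.31.5.17")        # assumed cloud-private address of a component
    print(f"cloud 172.31.5.17 appears on the VPN as {vpn_ip}")
```

The same mapping has to be torn down when a component migrates back into the data center, which is why the workflow-shifting described above and the network model can’t really be designed independently.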
Google has the smartest approach to next-gen cloud platforms of anyone out there, in my view. They have the smartest view of what a next-gen network looks like too. Are they now, by taking their time in creating a strong data center hybrid strategy, risking the loss of enterprises because the next-gen applications and network models for a hybrid could be developed before Google is an effective player? That could be an interesting question.
Also interesting is the question of whether there’s a connection between all of this and Juniper’s decision to pick up Bikash Koley, a well-known Google networking expert who played a strong role in the development of Google’s network/SDN approach. Might Juniper want to productize the Google model (which, by the way, is largely open)? We’ll see.
One thing is for sure: the conception of the cloud is changing. The new conception, which is what it should have been all along, is realistic and could drive a trillion-dollar cloud market. For the first time, we might see an actual shift in computing away from the traditional model. For vendors who thought that their problems with revenue growth were due to the cloud, this isn’t going to be good news. The cloud is just getting started, and it’s going to bring about a lot of changes in computing, software, and networking.