The Evolution of Public Cloud Services and Applications

Some recent stories on cloud provider growth and total revenue show Amazon well out in front of everyone else, Microsoft in a comfortable second place with faster-than-average growth, and Google and IBM locked in a battle for third place.  I’ve noted that a part of the total cloud dynamic is the market segments each provider is addressing.  Amazon has a very strong position with web-based startups, Microsoft has hybrid cloud strength, and IBM and Google are both struggling to find a niche to focus on.  Perhaps everyone, on both the provider and the consumer side, is doing the same.

I’m a programmer, software architect, and director of software development by background.  In my view, the challenges of the cloud begin with the foundation of business IT, which is “transaction processing”.  In the early days of IT, nearly everything done by computer was “batch processing” meaning that records of commercial activity were captured and entered into what was essentially a repository of business activity.  The actual activity was offline.

When “online transaction processing” (OLTP) came along, the traditional “TP” stuff was augmented with an “OL” part, meaning that applications were extended with the components needed to allow direct human interaction with transaction processing, bypassing the batch process.  Personally, I think a better term would have been “real-time transaction processing”, partly because the goal was to connect workers to work in real time, and partly because “online” means “on the Internet” these days.

The split between “OL” and “TP” is really critical for the cloud, because the cloud is really good at web-like, web-related stuff and much less so for linear processes that go from start to finish without human intervention.  The reality of enterprise use of the cloud today is that most of it relates to creating a cloud-hosted front-end to traditional IT applications, meaning it’s an implementation of that “OL” part.

Web server activity tends to be stateless, meaning that if you send an HTTP request for a web page, you could obtain it from any instance in a load-balanced pool of resources assigned to the URL.  Stateless stuff can thus be scaled on demand and replaced on demand.  In addition, when you make a web request it’s fairly easy to decode it and route the details from the baseline server to a deeper element depending on what’s being asked.  The delay, in the context of human request/response expectations, isn’t critical.  Not only that, you could in theory connect the user to a kind of “storefront” that assembled the “order” by calling a number of backend services and collecting the results.
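That “storefront” pattern can be sketched in a few lines of Python.  This is a minimal illustration, not anyone’s real system: the pricing and inventory functions here are hypothetical stand-ins for calls to actual backend services.

```python
# Sketch of a stateless "storefront" front-end: each request is handled
# with no server-side session state, and the response is assembled by
# fanning out to backend services in parallel.  The service functions
# below are hypothetical stand-ins for real network calls.
from concurrent.futures import ThreadPoolExecutor

def pricing_service(item_id: str) -> dict:
    # Stand-in for a call to a pricing microservice.
    return {"item": item_id, "price": 9.99}

def inventory_service(item_id: str) -> dict:
    # Stand-in for a call to an inventory microservice.
    return {"item": item_id, "in_stock": True}

def handle_order_request(item_id: str) -> dict:
    """Assemble the "order" by calling backend services and collecting
    the results.  Because no state is kept between requests, any
    replica of this front-end can serve any request."""
    with ThreadPoolExecutor() as pool:
        price_f = pool.submit(pricing_service, item_id)
        stock_f = pool.submit(inventory_service, item_id)
        return {"order": item_id,
                "price": price_f.result()["price"],
                "available": stock_f.result()["in_stock"]}

print(handle_order_request("sku-123"))
```

Because `handle_order_request` holds nothing between calls, scaling it is just a matter of running more copies behind the load balancer.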

The “OL” piece, in technical terms, is a good candidate for container hosting and microservices.  Containers are also a good strategy for enterprise server virtualization.  If we assume the two trends converge, then you can see why Kubernetes and various service-mesh and federation extensions to it are suddenly very popular.  Microsoft’s hybrid cloud primacy is less due to its credentials in Kubernetes (which Google developed, after all) than to the fact that it has a foot in both cloud and premises and was more aware of the bridging of the two, both technically and politically.
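In Kubernetes terms, scaling that stateless “OL” tier on demand is little more than a replica count.  A minimal Deployment sketch, with purely illustrative names and image, might look like this:

```yaml
# Minimal sketch: a stateless "OL" front-end as a Kubernetes Deployment.
# Names and image are illustrative, not from any real system.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ol-frontend
spec:
  replicas: 3            # scale the stateless tier up or down on demand
  selector:
    matchLabels:
      app: ol-frontend
  template:
    metadata:
      labels:
        app: ol-frontend
    spec:
      containers:
      - name: web
        image: example/ol-frontend:1.0   # illustrative image name
        ports:
        - containerPort: 8080
```

The point isn’t the YAML itself but that the stateless front-end maps naturally onto the replica model, which is exactly why container hosting fits the “OL” piece.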

Event processing and “functional” or “lambda” computing are a further recognition by cloud providers that the real-time space is their natural home.  For all the cloud providers, event-driven applications represent a second front, meaning a class of application that may still ultimately feed “TP” and thus be “OL” in my little sequence, but isn’t tied to web stuff and may have looser ties even to “TP” overall.  More “OL”, then.  Amazon and Microsoft saw (and perhaps still see) functional event processing as a separate “serverless-class” application, but I think both companies are eventually going to merge their serverless and container stories; Amazon IMHO is already showing signs of that.
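To make the “functional” model concrete, here’s a hedged sketch of lambda-style event handling: a stateless function the platform invokes once per event, with any durable state pushed out to an external store that could eventually feed “TP”.  The event shape and the in-memory “store” are invented for illustration.

```python
# Sketch of lambda-style event processing: a stateless handler is
# invoked once per event; durable state lives outside the function.
# The event shape and the in-memory "store" are invented for illustration.

STORE = {}  # stand-in for an external database or queue feeding "TP"

def handle_event(event: dict) -> str:
    """Process one event and hand the result off toward "TP".
    The function keeps no state between invocations, so the platform
    can spin instances up and down freely."""
    record = {"device": event["device"], "reading": event["reading"]}
    STORE.setdefault(event["device"], []).append(record)
    return f"accepted reading from {event['device']}"

# Simulate the platform delivering a stream of events.
for ev in [{"device": "sensor-1", "reading": 21.5},
           {"device": "sensor-2", "reading": 19.0},
           {"device": "sensor-1", "reading": 22.1}]:
    print(handle_event(ev))
```

Note that the same statelessness that makes the web front-end scalable is what makes the function “serverless”: the handler itself could just as easily run in a container, which is why the serverless and container stories seem likely to merge.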

Google from the first was pushing microservices, and that’s what hurt them.  Microservices are a development/application model, not a business mission.  Of all the cloud providers, they’ve been the slowest to embrace the hybrid cloud model as the dominant approach to the enterprise, the “OL” and “TP” segmentation I’ve talked about here.  Are they perhaps the most insightful about “microservices” as a development and deployment model?  Perhaps, but enterprises don’t tell me of any helpful conversations with Google cloud people on the “OL” versus “TP” or event front.  Still, they could succeed if they worked at it.

IBM is in a different kind of bind after acquiring Red Hat.  What’s good for IBM cloud may not be best for IBM.  Red Hat and OpenShift are more and more the container/Kubernetes turnkey platform of choice.  An optimum vision there would make the hybrid-cloud link totally generic, meaning OpenShift would deploy an “OL”/“TP” pair for an enterprise, and even support development specifically for that pairing and the linkage between the two.  That open approach would make OpenShift a great partner for any cloud provider, not just IBM.

IBM and Google are tied in the cloud, essentially.  IBM’s best option to gain there would be to create a specific bridge between OpenShift and IBM’s cloud, but aimed where?  The “OL” part of OLTP is already highly committed to Microsoft, and Amazon is gaining ground there.  Does IBM meet those two head-on?  Do they look instead at an “OL” that’s not so “TP”-connected, like event processing?  Or do they look elsewhere?  Does Google try to leverage its Kubernetes position more in the “OL” and “TP” front-end-model race, or does it try to go after events too?  Or do one or both look elsewhere?

Carrier cloud is the big imponderable here.  If network operators were to fully realize the potential of the cloud in transformation, they’d generate the largest number of new cloud data centers in the market, and become collectively the largest owner of cloud infrastructure.  However, operators have six specific driver missions: vCPE and NFV; streaming advertising and video service features; IMS/EPC/5G; personalized and contextual services; IoT; and network operator cloud computing services.  The last of these would mean facing the same mission realities as the other public cloud providers, but all the others are more specialized and some wouldn’t look much like containers and microservices at all.  Whatever missions drive carrier cloud, or if none drive it convincingly, the result will be market-changing.

Google’s decision to name a head of a telecom group and focus on that market is likely an indicator that Google at least recognizes that 1) carriers are approaching carrier cloud in a mission-specific way and 2) that means each mission could end up being outsourced to a cloud provider…like Google, of course.  If Google could pick up a couple of carrier missions, they could gain tremendously in cloud revenue and at the same time hone their skills and tools for another set of cloud applications, a set that might eventually fit into enterprise cloud usage as well.

IBM is a player that operators would love to have in the game.  Unlike Google, IBM isn’t seen as a threat to any operator service future that’s being seriously considered.  They’re strong in IT and software, where operators are almost pathetically weak.  They’re highly credible among C-level executives.  They’re hungry.  But they’re probably on the horns of the dilemma I mentioned earlier in this blog; do they want the IBM cloud to gain, or IBM overall?  I think the latter will be the answer.

A decade ago, when I was involved in the IPsphere Forum, the operators in the body were more interested in working with Google than with any other player in the industry.  I’d bet that’s true today, too.