What’s Really Behind Amazon’s New “Premises-Cloud” Push?

Amazon has been working hard to make the cloud more than just outsourced server consolidation, and its latest move might be its most significant.  They’ve announced a distributed platform (hardware and software) that can extend some important AWS API services nearly anywhere: not only the data center, but potentially anywhere you can run Ubuntu (or Amazon) Linux.  It’s not exactly what many online stories describe, but in some ways it’s more.  To get to reality, we need to look at what Amazon announced and at what’s happening in the cloud.

The basics of Amazon’s announcement are simple.  The software component is called “Greengrass”, and it’s designed to provide an edge hosting point for Amazon’s IoT and Lambda (functional programming) tools, permitting forward placement of logic to reduce the impact of propagation delay on the control loops used in many process automation and communications applications.  The hardware is called “Snowball Edge”, and it’s a secure, high-performance edge appliance for Snowball, the bulk data transfer service Amazon has offered for some time.  Snowball Edge gives corporate users the ability to stage large data sets securely for movement into the cloud.  Snowball Edge appliances can also run Greengrass, which makes the combination a nice choice for edge event management and collection.
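To make the “forward placement of logic” concrete, here is a minimal sketch of the kind of stateless, Lambda-style handler Greengrass lets you host at the edge.  The event shape, device names, and threshold are hypothetical, purely for illustration:

```python
# A sketch of edge-hosted control logic: a stateless handler that closes a
# control loop locally instead of waiting on a round trip to a distant cloud
# region.  All names and thresholds here are hypothetical.

TEMP_LIMIT_C = 85.0  # hypothetical shutdown threshold for a monitored device

def edge_handler(event, context=None):
    """Lambda-style stateless handler: act locally, report upstream later."""
    temp = event["temperature_c"]
    if temp > TEMP_LIMIT_C:
        # Real-time action taken at the edge, with no cloud round trip.
        return {"action": "shutdown", "device": event["device_id"]}
    return {"action": "none", "device": event["device_id"]}
```

Because the handler holds no state of its own, the same code can run unchanged in a cloud region or on an edge appliance; only the placement decision changes.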

All of this is logical given that cloud computing is clearly evolving.  In the current thinking, we are not going to “move” applications to the cloud, we’re going to write applications for the cloud.  Since the general trend in applications has been toward empowering the worker, new cloud applications would probably be designed to push IT directly into workers’ hands, at the point of activity.  That means applications would have to be more real-time and more distributed to be responsive to worker needs.  In short, we’re moving toward enterprise software that looks like a multi-level hierarchy of processes, with simple ones at the edge and complex ones deep inside.

For Amazon, in a technical sense, the goal here is simple: if you have a cloud data center located hundreds of miles (or more) from the user, you risk creating enough lag between event reception and processing that some real-time tasks won’t be efficient, or won’t work at all.  Amazon’s two services (IoT and Lambda) are both designed to support exactly that kind of real-time application.
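A back-of-the-envelope calculation shows why distance matters.  Signals in fiber travel at roughly two-thirds the speed of light, about 200 km per millisecond; the distances and the implied control-loop budget below are illustrative, not from the announcement:

```python
# Pure propagation delay for a request/response over fiber, ignoring queuing,
# serialization, and processing.  The 200 km/ms figure is ~2/3 of c in glass.

FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km):
    """Best-case round-trip propagation delay to a host distance_km away."""
    return 2 * distance_km / FIBER_KM_PER_MS

# A data center 1,000 km away costs ~10 ms before any processing happens;
# an edge host 10 km away costs ~0.1 ms.  Tight control loops feel that gap.
```

Real-world latency is worse than this best case, which only strengthens the argument for hosting time-critical logic at the edge.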

The details of the target features are important, IMHO.  Most of the IoT features Amazon offers relate to device registration and control, which logically are edge functions; millions of devices (hypothetically) could swamp centralized facilities.  Lambda services are really part of a software evolution toward pushing logic to the point of need.  A Lambda function is a nubbin of code that is fully, truly stateless in its behavior and can be deployed in any number of instances anywhere it’s useful.  You have simple features assigned to simple tasks.  There’s no fixed resource assignment, either; Lambdas float around in the cloud, and from a charging point of view you pay only for what you use.  They’re also simple; there’s evidence that Lambda functional programming could be made accessible enough for end-users with some tech savvy to do themselves.  All these factors make Lambdas ideal for distribution to the edge.
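The statelessness point is what makes free replication work, and it is easy to illustrate: a Lambda-style function’s output depends only on the event it receives, so any copy of the function, running anywhere, gives the same answer.  The function and event fields below are hypothetical:

```python
# Why statelessness makes Lambdas freely replicable: no globals, no stored
# history, so every instance is interchangeable with every other instance.
# Names and event shape are hypothetical, for illustration only.

def classify_reading(event):
    """A 'nubbin' of stateless logic: the result is a pure function of input."""
    return "alarm" if event["value"] > event["limit"] else "ok"

# Two independent "instances" (here, just two calls) behave identically for
# identical events, so a scheduler can place them wherever capacity exists.
events = [{"value": 5, "limit": 10}, {"value": 12, "limit": 10}]
results = [classify_reading(e) for e in events]
```

Since no instance owns any state, adding or removing instances at the edge requires no coordination, which is exactly what per-use billing and dynamic placement depend on.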

Amazon has no real edge presence, of course.  They aren’t an IT company, so they don’t sell computers or application software.  They have no position in mobile phones (their Fire Phone wasn’t a success).  Could they push functionality to a Fire tablet?  Sure, but not unless they developed an approach to function distribution that’s general, simple, and applicable to near-term needs.  Sounds like IoT and real-time, like Lambdas and data caches.

The key point here is that this initiative is very focused.  Amazon is not trying to get into IBM’s or Microsoft’s computing business or trying to replicate their cloud computing models.  Such a move would be a giant step and probably a giant cost, risk, and error for Amazon (which has enough on its plate as it is).  I think the Greengrass/Snowball combination is aimed specifically at IoT and the real-time cloud, and I think that if there’s a competitive thrust to it, that thrust is against Microsoft first and Google second.

Microsoft Azure is a platform-as-a-service cloud, and as such it naturally extends itself onto the premises, which means that “Azure components” are also Windows Server components.  That makes it easy for Microsoft to build real-time applications that employ “shallow” and “deep” processes.  If you look at Google’s Cloud Platform material on IoT, you see the same basic features that Microsoft and Amazon have, but you also see emphasis on Google’s fiber network, which touches most of the major ISPs in peering for content delivery.  That gives Google a low-latency model.

IoT, if anyone ever gets a useful handle on it, would be the largest cloud application driver in the market.  No cloud provider could hope to survive without a strong position in it, and that includes Amazon.  Thus, having a platform to push to the cloud edge and aligning it explicitly with IoT and real-time applications is essential if Amazon is to hold on to its lead in the cloud.  Remember that Microsoft already does what Amazon just did; the only thing that’s prevented IoT and real-time from shaking the leadership race in the cloud has been lack of market opportunity.

I’ve said in a number of forums that we are starting to see the critical shift in the cloud, the shift from moving to it to writing for it.  It’s that shift that creates the opportunity and risk, and since that shift is just starting to happen Amazon wants to nip its negative impacts in the bud and ride its positive waves.  That means that this cloud market shift is important for everyone.

First, forget any notion of “everything in the cloud”, which was silly to start with.  What the future holds is that model of process caching that I blogged about before.  You push processes forward if they support real-time applications or if they need to be replicated on a large scale to support workload.  You pull them back when none of that is true, and it’s all dynamic.  We have a totally different development model here.
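The process-caching model above is, at bottom, a placement decision, and it can be sketched in a few lines.  The thresholds and profile fields here are entirely hypothetical; the point is only the shape of the decision:

```python
# A sketch of the "process caching" decision: push a component to the edge
# when it is real-time or heavily replicated, pull it back deep into the
# cloud otherwise.  Thresholds and field names are hypothetical.

EDGE_LATENCY_BUDGET_MS = 20   # loops tighter than this go forward
REPLICA_THRESHOLD = 100       # heavy fan-out also justifies edge placement

def place_process(profile):
    """Return 'edge' or 'deep' for a component's current demand profile."""
    if profile["latency_budget_ms"] < EDGE_LATENCY_BUDGET_MS:
        return "edge"
    if profile["replicas_needed"] > REPLICA_THRESHOLD:
        return "edge"
    return "deep"  # pull it back when neither condition holds
```

In the dynamic model the article describes, this decision would be re-evaluated continuously as workload and real-time requirements change, not made once at deployment time.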

Second, functional or Lambda programming isn’t just some new software gizmo to titillate the geek kingdom.  We’re moving from a model where we bring data to applications, toward one where we bring applications (or their components) to data.  The logical framework for that is the highly scalable, transportable, Lambda model of software.  Lambdas, though, are more likely to be tied to real-time push-it-forward components than to the whole of an application, which says that components will become increasingly language-and-architecture-specialized over time, with shallow and deep parts of software done differently.

Third, while Amazon doesn’t want to compete with IBM and Microsoft in their basic data center business, they probably do cast covetous eyes at the IT business overall in the long term.  What Greengrass/Snowball shows is that if the mission is process caching, then a cloud-centric approach is probably the right one.  Amazon won’t capture the premises software and hardware opportunity by competing in those spaces, but by making those spaces obsolete: by subsuming them into the cloud, and thus trivializing them as just another place to host cloud stuff.  Marginalize what you cannot win.

What this shows most of all is that we have totally misread the cloud revolution.  The original view of just moving stuff to the cloud was never smart, but even the realization that software written for the cloud would be the source of cloud growth undershoots reality.  The cloud is going to transform software, and that means everything in IT and consumer tech will be impacted.  It’s going to ruffle a lot of feathers, but if you like excitement, be prepared.