Amazon Signals a Major Shift in Software and the Cloud

Amazon is making Greengrass, its functional-programming bridge between the cloud and the premises, available to all customers, and Nokia has already announced support for it on Nokia Multi-Access Edge Computing (MEC) gear.  This is an important signal to the market in the area of IoT, and also a potentially critical step in deciding whether edge (fog) computing or the centralized cloud will drive cloud infrastructure evolution.  It could also have a profound impact on chip vendors, server vendors, and software overall.

Greengrass is a concept that extends Amazon’s Lambda service outside the cloud to the premises.  For those who haven’t read my blogs on the concept, the Lambda service applies functional programming principles to support event processing and other tasks.  A “lambda” is a unit of program functionality that runs only when needed, offering “serverless” computing in the cloud.  Amazon and Microsoft support the functional/lambda model explicitly, and Google does so through its microservices offering.
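To make the “lambda” idea concrete, here is a minimal sketch in Python of the kind of stateless, event-triggered handler Lambda runs.  The handler signature follows the standard AWS Lambda convention, but the event fields here are hypothetical.

```python
# A minimal AWS Lambda-style handler: a stateless unit of code that runs only
# when an event arrives, so no server is provisioned or managed by the caller.
# The event fields ("sensor_id", "temperature") are hypothetical.

def handler(event, context):
    # Pull the values of interest out of the triggering event.
    sensor_id = event.get("sensor_id")
    temperature = event.get("temperature")

    # Do the unit of work this function exists for, then return a result.
    if temperature is not None and temperature > 100:
        return {"sensor_id": sensor_id, "alert": "overheat"}
    return {"sensor_id": sensor_id, "alert": None}
```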

The challenge that Amazon faced with Lambda was a poster child for the edge/central cloud issue I’ve blogged about (most recently last week).  The most compelling application for Lambda is event processing, including IoT.  Most event processing is associated with what are called “control loop” applications, meaning that an event triggers a process control reaction.  These applications typically demand a very low latency for the obvious reason that if, for example, you get a signal to kick a defective product off an assembly line, you have a short window to do that before the product moves out of range.  Short control loops are difficult to achieve over hosted public cloud services because the cloud provider’s data center isn’t local to the process being controlled.  Greengrass is a way of moving functions out of the cloud data center and into a server that’s proximate to the process.
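Here is a hedged illustration of that control-loop point, in plain Python rather than any actual Greengrass API.  The event fields, the latency budget, and the actuate_reject call are all hypothetical, but they show why the decision has to be made near the line rather than in a distant data center.

```python
import time

# Hypothetical control-loop handler running on an edge (Greengrass-style) host.
# The point is the latency budget: the reject actuator must fire before the
# item travels past the ejector, so a round trip to a remote cloud region
# would not fit inside the window.

WINDOW_SECONDS = 0.050  # hypothetical 50 ms budget before the item is out of range

def on_defect_event(event, actuate_reject):
    """Handle a 'defective item detected' event from a line sensor.

    `event` carries the detection timestamp; `actuate_reject` stands in for
    whatever local call pushes the item off the line (hypothetical here).
    """
    elapsed = time.time() - event["detected_at"]
    if elapsed <= WINDOW_SECONDS:
        actuate_reject(event["item_id"])
        return "rejected"
    # Too late to act; record it for back-end analytics instead.
    return "missed-window"
```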

The obvious message here is that Amazon knows that event-processing applications will need edge hosting of functions.  Greengrass solves that problem by moving them out of the public cloud, which is good in that it solves the control-loop-length problem and bad in that it denies Amazon the revenue associated with running the functions.  To me, this is a clear signal that Amazon eventually wants to offer “edge hosting” as a cloud service, that the cloud event-processing opportunity creates the need for it, and that IoT is what creates that opportunity.

There are few credible IoT applications that aren’t related to some kind of event processing since IoT is all about handling sensor-created events.  Thus, a decisive shift toward IoT as a driver of cloud deployments could shift the focus of those deployments to the edge.  This could change a lot of power balances.

In the cloud provider space, edge hosting is problematic because of real estate.  Cloud providers have traditionally focused on a small number of large data centers, not only for reasons of economy of scale in hosting, but to avoid having to acquire facilities in every metro area.  Amazon may see Greengrass as an opportunity to enter the event fray with an “integrated hybrid cloud” approach, where it could license a cloud service that includes the option for premises hosting.  However, facility-based service providers (telcos, ISPs, cablecos, etc.) would have edge-hosting-ready real estate to exploit, and that could force the traditional cloud providers to look for their own space.

On the vendor side, edge hosting would be a boon to chip vendors, particularly those who focus not on chips for “servers” but on chips designed to run the more compute-intensive functional-programming components associated with event processing.  The event-cloud model could look like a widely distributed set of compute nodes, requiring what could be millions of new chips.

At the same time, edge hosting divides the chip opportunity, or perhaps changes it entirely.  Functional programming is highly compute-intensive; because functional components hold no state between invocations, strict adherence to its principles makes execution almost entirely a matter of raw computation.  General-purpose server chips can still execute functional programs, but it’s likely that you could design a function-specific chip that would do better, and be cheaper.

At the server design level, we could see servers made up of many of these specialized chips, either as dense multi-chip boards or as a bunch of “micro-boards” each hosting a traditional number of chips.  Either approach would provide an entry point for a lot of new vendors.

This shift would favor (as I pointed out last week) network equipment vendors as providers of “hosting”.  A network-edge device is a logical place to stick a couple of compute boards if you’re looking for event-processing support.  This wouldn’t eliminate the value of, and need for, deeper hosting, since even event-driven applications have back-end processes that look more like traditional software than like functional programming, but it would make the back end the tail to the event-edge dog.

On the software side, event-focused application design that relies on functional programming techniques could shift the notion of cloud applications radically too.  You don’t need a traditional operating system and middleware; functional components are more like embedded control software than like traditional programs.  In fact, I think that most network operating systems used by network equipment vendors would work fine.

That doesn’t mean there aren’t new software issues.  Greengrass itself is essentially a functional-middleware tool, and Microsoft offers functional-supporting middleware too.  There are also programming languages designed for functional computing (Haskell, Elm, and F# are the top three by most measures, with F# likely having a momentum edge in commercial use), and we need both a whole new set of middleware tools and a framework in which to apply them to distributed, function-based application design.
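As a very rough, hypothetical sketch of the minimum job that middleware has to do, consider wiring small stateless functions into an event workflow.  This is plain Python, not any actual Greengrass or Azure API, and the individual steps are invented for illustration.

```python
from functools import reduce

# Minimal sketch of the job functional middleware has to do: compose small,
# stateless functions into an event-processing workflow.  Plain Python, not a
# real Greengrass or Azure API; the steps below are hypothetical.

def compose(*steps):
    """Chain functions left to right: compose(f, g)(x) == g(f(x))."""
    return lambda value: reduce(lambda acc, step: step(acc), steps, value)

def parse(raw):
    return {"reading": float(raw)}

def enrich(event):
    return {**event, "severity": "high" if event["reading"] > 100 else "low"}

def route(event):
    queue = "alarm-queue" if event["severity"] == "high" else "log-queue"
    return queue, event

pipeline = compose(parse, enrich, route)
print(pipeline("142.7"))  # ('alarm-queue', {'reading': 142.7, 'severity': 'high'})
```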

The issues of functional software architectures for event handling are complicated, probably too complicated to cover in a blog that’s not intended purely for programmers.  Suffice it to say that functional event programming demands total awareness of workflows and latency, and that it’s probably going to be used as a front end to traditional applications.  Since events are distributed by nature, it’s reasonable to expect that the event-driven components of applications would map better to public cloud services than relatively centralized, traditional IT applications would.  It’s therefore reasonable to expect that public cloud services would shift toward event-and-functional models.  That’s true even if we assume nothing happens with IoT, and clearly something will.
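As a rough sketch of that front-end/back-end split (plain Python, with hypothetical event fields and a stand-in post_to_backend call), the front end makes stateless, per-event decisions and forwards only condensed results to the traditional application behind it.

```python
# Hypothetical sketch of the "functional front end, traditional back end" split:
# small stateless functions handle each event close to the source, and only a
# condensed result flows to the conventional IT application behind them.

def classify(event):
    # Pure, stateless decision on a single event (front-end work).
    return "alarm" if event.get("reading", 0) > event.get("threshold", 100) else "normal"

def to_transaction(event, status):
    # Reshape the event into the record the back-end system expects.
    return {"source": event["source"], "status": status, "value": event.get("reading")}

def front_end(events, post_to_backend):
    # `post_to_backend` stands in for whatever queue or API feeds the
    # traditional application; only alarms are forwarded, trimming both
    # latency and traffic on the deeper path.
    for event in events:
        status = classify(event)
        if status == "alarm":
            post_to_backend(to_transaction(event, status))
```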

What we can say about functional software impacts is that almost any business activity stimulated by human or machine activity can be viewed as “event processing”.  A transaction is an event.  The model of software we’ve evolved for mobile/Internet use, which front-ends traditional IT components with a web-centric set of elements, is the same basic model that functional event software implements.  Given that, it’s very possible that functional logic will evolve into the preferred tool for application front-end processes, whether those front ends are IoT- and machine-driven or human-based.

That means that Amazon’s Greengrass might be a way for Amazon to establish a role for itself in broader IT applications.  Since Amazon (and Microsoft, and Google) also have mobile front-end tools, this might all combine to create a distinct separation of applications, with a front-end component set hosted in the public cloud and a traditional back end centered in the data center.  This, and not the conversion of legacy applications to run in the cloud, would then be the largest source of public cloud opportunity.

A “functional cloud” would also have an impact on networking.  If we assume that event processing is a driver for the future of cloud services, then we have to assume there’s a broad need to control the length of the control loop, meaning network latency.  Edge hosting accomplishes that for functional handling that occurs proximate to the event source, but remember that all business applications end up feeding deeper, more traditional processes like database journaling.  In addition, correlation of events from multiple sources has to draw from all those sources, which means the correlation point has to be sited conveniently to all of them and have low-latency paths to each.  All of this suggests that functional clouds would have to be connected with a lot of fiber, and that “data center interconnect” would become a subset of “function-host-point interconnect”.

Overall, the notion of a function-and-event-driven cloud could be absolutely critical.  It would change how we view the carrier cloud, because it would let carriers take advantage of their edge real estate to gain market advantage.  The model has been validated by all the major public cloud providers, including OTT giant Google, and now Amazon is showing how important edge hosting is.  I think it’s clear that Amazon’s decision alone would carry a lot of predictive weight, and of course it’s only the latest step on an increasingly clear path.  The times, they are a-changin’.