Serverless Computing, the “No Machine”, and the Cloud/Network Relationship

What is “cloud computing?”  There have been two implicit, competing, contradictory definitions up to now.  The first is that it’s “traditional computing hosted in the cloud”.  That implies that the value the cloud brings is largely cost displacement.  The other is that it’s “a computing paradigm designed to support virtual, dynamic processes using a pool of virtual resources hosted anywhere, on anything.”  That implies that computing in the cloud has to be made for the cloud.  From the inception of the cloud, the first definition prevailed.  This year it’s losing its hold quickly, and by the end of the year that old notion will be gone.

This is great if you’re a writer/editor, because it gives the whole cloud thing a new lease on life.  Nothing is worse for a reporter than having to rehash the same tired old points because nothing new is coming along.  “News”, after all, means “novelty”.  For the technologists on the seller and buyer sides, though, the new cloud definition poses major problems in realization.  Eventually even the reporters and editors will have to contend with that issue, and nobody is going to find it easy.

“The cloud”, in our second definition, defines a truly virtual future.  Applications are made up of virtual processes or components.  Things run on virtual resources.  The mappings between the abstractions we manipulate to build and run things essentially connect smoke to fog.  What we do today to build and run applications connects real things, entities that are somewhere specific and are addressed by where they are.

In an application sense, up to now, virtualization has focused on making something abstract look real by making the connection to it agile enough to follow the abstraction to where it happens to be.  We have a whole series of location-independent routing concepts that accomplish that goal.  The true cloud model would have to think differently, to identify things by what they do and not where they are.  Call it “functional mapping”.  We see a very simple example of functional mapping in content delivery networks.  The user wants Video A.  They have a URL that represents it, but when they click the URL they actually get the IP address of a cache point that’s optimal for their particular location and other conditions.
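To make that concrete, here’s a minimal sketch of the idea in Python.  The regions, addresses, and resolver function are all hypothetical, but the shape is the point: the caller names what it wants, and the mapping decides where that is right now.

```python
# A minimal sketch of "functional mapping": the caller asks for what it
# wants (a content ID) and a resolver picks where to get it, based on
# the client's location. All names and addresses here are hypothetical.

CACHE_POINTS = {
    "us-east": "192.0.2.10",
    "us-west": "192.0.2.20",
    "eu":      "192.0.2.30",
}

def resolve(content_id: str, client_region: str) -> str:
    """Map a 'what' (content_id) to a 'where' (cache IP) at request time."""
    # Fall back to a default cache if the client's region has no edge point.
    ip = CACHE_POINTS.get(client_region, CACHE_POINTS["us-east"])
    return f"http://{ip}/videos/{content_id}"

# The user clicks a URL naming Video A; the address they actually connect
# to depends on where they are, not on where the video "lives".
print(resolve("video-a", "eu"))  # -> http://192.0.2.30/videos/video-a
```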

A more generalized approach is offered by Linkerd, a concept that’s been described as Twitter-scale operability for microservice applications.  Linkerd (pronounced “Linker-Dee”) was the focus of a recent announcement by startup Buoyant, and it provides what Buoyant calls a “service mesh”.  The idea is to provide a process communications service that acts as a proxy between the request for a microservice/process and the fulfillment.  Instead of just doing API management, Buoyant/Linkerd adds policy control and load balancing to the picture.

By integrating itself with instance-control or scalability APIs in the container or cloud software stack, Buoyant/Linkerd can allow an application to scale up and down, while distributing work to the various instances based on a variety of strategies.  The load-balancing process also lets an application recover from a lost instance, and since the technology came from Twitter, it’s proven to be efficient and scalable to massive numbers of users.
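Conceptually, the mesh sits between the request and its fulfillment, something like the sketch below.  This is not Linkerd’s actual interface, just a toy proxy in Python showing the two behaviors in question: a load-balancing policy (round-robin here) and recovery from a lost instance.

```python
import itertools

# A rough sketch of what a "service mesh" proxy does between a request
# and its fulfillment: choose an instance by policy (round-robin here),
# and fail over to another instance if one is lost. An illustration of
# the concept only, not Linkerd's real API.

class MeshProxy:
    def __init__(self, instances):
        self.instances = list(instances)
        self._rr = itertools.cycle(self.instances)  # round-robin policy

    def call(self, request, attempts=3):
        last_error = None
        for _ in range(attempts):
            instance = next(self._rr)
            try:
                return instance(request)       # forward to the chosen instance
            except ConnectionError as err:     # instance lost: try the next one
                last_error = err
        raise RuntimeError("all instances failed") from last_error

# Two stand-in "instances" of the same microservice:
proxy = MeshProxy([lambda req: f"A handled {req}",
                   lambda req: f"B handled {req}"])
print(proxy.call("order-123"))  # alternates between A and B across calls
```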

There are still a few issues, though.  At this point, the Linkerd model doesn’t address the critical question of state or context.  Many software components are written to support multi-message exchanges where the processing of the entire sequence is needed for any given message to be processed correctly.  These components can’t be meshed unless they’re designed not to save state information internally, and that raises the question of “functional programming”: code that keeps no saved state, and so can support service meshing and the instantiation or replacement of services without any context issues.
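The difference is easy to see in code.  In the sketch below (both versions are invented for illustration), the stateful component traps the message sequence inside one instance, while the stateless version passes context in and out, so any replica anywhere can process any message.

```python
# Why internal state blocks meshing: if two replicas of this class each
# see half of a multi-message exchange, neither has the full sequence.

class StatefulSession:
    def __init__(self):
        self.messages = []        # state trapped inside one instance

    def handle(self, msg):
        self.messages.append(msg)
        return len(self.messages)

# The stateless alternative passes context in and out with every call,
# so any replica, anywhere, can process any message in the sequence.

def stateless_handle(msg, context):
    """Pure function: (message, prior context) -> (reply, new context)."""
    new_context = context + [msg]
    return len(new_context), new_context

ctx = []
reply, ctx = stateless_handle("hello", ctx)
reply, ctx = stateless_handle("again", ctx)  # could run on a different replica
print(reply)  # -> 2
```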

Both Microsoft and Amazon have functional programming support in their clouds, and Amazon’s is called “Lambda” because the programming/technical term for functions that don’t save state is “lambda function”.  You can run a Lambda to process an event, and it can run anywhere and be replicated any number of times because it’s stateless.  Amazon charges for the activation and the duration of the execution, plus for the memory slot needed, and not for a “server”, which gives rise to the notion of serverless computing.
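A Lambda handler shows why that works.  Here’s a minimal Python example in the standard handler form: everything the function needs arrives in the event itself, and nothing persists between invocations.

```python
import json

# A minimal AWS Lambda handler in Python. Because it keeps no state
# between invocations, it can run anywhere and be replicated freely;
# you pay per activation and execution time, not for a server.

def lambda_handler(event, context):
    # Everything needed to process the event arrives with the event.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```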

It’s easy to get wrapped around the notion of serverless computing because it’s a different pricing paradigm, but that’s not the news in my view.  I think “serverlessness” is a key attribute of the future cloud because it breaks the notion that software has to be assigned to a server, even a virtual one.  Amazon may think so too, because it’s recently announced wrapping Lambda in a whole ecosystem of features designed to make application hosting in the cloud serverless and at the same time deal with the nagging problem of state.  The future of the cloud is less “virtual machines” than no machines.

Amazon’s Serverless Computing has about nine major elements, including Lambda, orchestration and state management, fast data sources, event sources, developer support, security, and integration with traditional back-end processes.  There’s an API proxy there too, and new integration with the Alexa speech and AI platform.  It’s fair to say that if you want to build true future-cloud apps Amazon provides the whole platform.
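The orchestration and state-management element is where the nagging state problem gets handled: a state machine holds the sequence while the individual Lambdas stay stateless.  The fragment below is shaped like the Amazon States Language notation that AWS Step Functions uses, but the workflow and function ARNs are invented for illustration.

```python
# Orchestration is where state lives in a serverless design: a state
# machine sequences stateless Lambdas. This dict is shaped like Amazon
# States Language (the notation Step Functions uses); the workflow and
# the function ARNs are placeholders.

workflow = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",
            "Next": "ChargeCard",
        },
        "ChargeCard": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge",
            "End": True,
        },
    },
}
```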

You can see where this is heading, and probably some of the issues as well.  We really do need some new compute models for this, and one possibility is a variation on the notion of source routing and domain routing that came along (of all places!) with ATM.  Envision this lonely event popping into the process domain somewhere.  A lambda function picks it up because it’s associated with the event type, and based on some categorization it prepends a rough process header that perhaps says “edit-log-process”.  It then pushes the event (and header) to another “edit-front-end” lambda.  There, the “edit” step is popped off and a series of specific steps, each represented by a lambda, are pushed on instead.  This goes on until the final microstep of the last “process” has completed, which might generate a reply.
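Here’s one way to sketch that flow, with every step name invented for illustration: the event carries a stack of rough steps, and gatekeeper lambdas refine each rough step into specific microsteps as it moves.

```python
# A sketch of "function-source-routing" as described above: the event
# carries a stack of rough steps; a gatekeeper lambda pops a rough step
# and pushes the specific microsteps that implement it. All step names
# are hypothetical.

def classify(event):
    # Entry point: prepend a rough process header based on event type.
    event["steps"] = ["edit", "log", "process"]
    return event

def edit_front_end(event):
    # Gatekeeper: replace the rough "edit" step with specific microsteps.
    event["steps"] = ["validate", "normalize"] + event["steps"]
    return event

HANDLERS = {
    "edit": edit_front_end,      # gatekeeper for the rough step
    "validate": lambda e: e,     # stand-ins for real microstep lambdas
    "normalize": lambda e: e,
    "log": lambda e: e,
    "process": lambda e: e,
}

def run(event):
    event = classify(event)
    while event["steps"]:
        step = event["steps"].pop(0)   # pop the next step off the header...
        event = HANDLERS[step](event)  # ...and hand off; each could run anywhere
    return {"reply": "done"}           # the final microstep may generate a reply

print(run({"type": "edit-request"}))
```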

In this approach, the “application” is really a set of steps that are roughed out at the event entry point and refined by a gatekeeper step for each of the major rough steps.  Nobody knows where anything is, only that there are events and processes that are swirling around out there in the aether, or waiting to be instantiated on something in the Great Somewhere when needed.  Function-source-routing, perhaps?

We are a long way from understanding and utilizing this sort of approach fully, but clearly not far from dabbling in it to the point where it generates enough opportunity to interest cloud providers like Amazon.  You can use the model as a front-end to traditional applications, to link them to mobile empowerment or IoT, for example.  That will probably be the first exposure most companies have to the “new cloud” model.  Over time, highly transactional or event-driven applications will migrate more to the model, and then we’ll start decomposing legacy models into event-driven, lambda-fulfilled steps.

This is where the real money comes in, not because of the technology but because of what the technology can do.  Event-driven stuff is the ultimate in productivity enhancement, consumer marketing, contextual help, and everything else.  We, meaning humans, are essentially event processors.  When we can say the same thing of the cloud, then we’ve finally gotten computing on our own page.