Google Steps Into Lambdas: What More Proof Do We Need?

I write a lot about things that aren’t mentioned often elsewhere, and that might rightfully make you wonder whether I’m just off in the lunatic fringe.  I wrote a series of blogs about the shift in software in general, and the cloud in particular, to “functional” or “Lambda” programming, and a few of you indicated it was a topic you’d never heard of.  So, was I on the edge or over it here?  I think the latest news shows which.

Google, finally awakening to the battle with Amazon and Microsoft for cloud supremacy, is making changes to its cloud services.  One of the new features, announced at Google’s Cloud Next event, is extending Google’s “elastic pricing” notion to fixed machine instances, but the rest focus on functional changes.  In one case, literally.

Even the basic innovations in Google’s announcement were indicators of a shift in the cloud market.  One especially interesting addition is the new Data Loss Prevention API, which takes advantage of Google’s excellent image analysis software to identify credit cards in images and block out the number.  There are other security APIs and features as well, and all of these belong to the realm of hosted features that extend basic cloud services (IaaS).  In combination, they prove that basic cloud hosting is not only a commodity, it’s a dead end as far as promoting cloud service growth is concerned.  The cloud of the future is about cloud-based features, used to develop cloud-specific applications.

Which leads us to what I think is the big news, the service Google calls “Cloud Functions”.  This is the same functional/Lambda programming support that Amazon and Microsoft have recently added to their hosted-feature inventories (AWS Lambda and Azure Functions, respectively).  Google doesn’t play up the Lambda or functional programming angle; they focus instead on the more popular microservice concept.  A Cloud Function is a simple atomic program that runs on demand wherever it’s useful.
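
To make the idea concrete, here’s a minimal sketch of what an atomic, on-demand function looks like.  This is plain Python, not tied to any provider’s SDK; the function name and event fields are my own illustration, not Google’s API:

```python
import json

def resize_image(event: dict) -> dict:
    """A hypothetical atomic function: one task, no local state.

    It receives an event (say, "a file landed in storage"), does its
    one job, and exits.  The platform decides where and when it runs.
    """
    bucket = event["bucket"]   # illustrative event fields
    name = event["name"]
    # ... fetch the object, resize it, write the result back ...
    return {"status": "resized", "object": f"{bucket}/{name}"}

if __name__ == "__main__":
    # Locally, we can fake the event the platform would deliver.
    fake_event = {"bucket": "photos", "name": "cat.jpg"}
    print(json.dumps(resize_image(fake_event)))
```

The point is that nothing about the function assumes a particular server, or even that a server exists between invocations.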

Episodic usage isn’t exactly the norm for business applications, and Google makes it clear (as do Amazon and Microsoft) that the sweet spot for functional cloud is event processing.  When an event happens, the associated Cloud Functions run and you’re charged for that execution.  When there’s no event, there’s no charge.
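
The economics are easy to see with a back-of-the-envelope comparison.  The prices below are made-up placeholders, not any provider’s rate card, but the shape of the result holds:

```python
# Hypothetical prices for illustration only, not any provider's rate card.
VM_PER_HOUR = 0.05             # always-on instance, dollars per hour
FN_PER_INVOCATION = 0.0000002  # per function invocation, dollars

events_per_month = 10_000      # a sparse event source
hours_per_month = 730

vm_cost = VM_PER_HOUR * hours_per_month
fn_cost = FN_PER_INVOCATION * events_per_month

print(f"Always-on VM:        ${vm_cost:.2f}/month")
print(f"Per-event functions: ${fn_cost:.4f}/month")
# With sparse events, the function model wins by orders of magnitude;
# with a constant event firehose, the always-on VM starts to win back.
```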

There are a lot of things Google could focus a new competitive drive on, and making Cloud Functions a key element of that drive says a lot about what Google believes will be the future of cloud computing.  That future, I think, could well be built on a model of computing that’s a variant on the popular web-front-end approach now used by most enterprises.  We could call it the “event-front-end” model.

Web front-end application models take the data processing or back-end elements and host them in the data center as usual.  The front-end part, the thing that shows screens and gives users their GUI, is hosted in the cloud as a series of web servers.  Enterprises are generally comfortable with this approach, and while you may not hear a lot about this, the truth is that most enterprise cloud computing commitments are built on these web front-ends.

It seems clear that Amazon, Google, and Microsoft all see the event space as the big driver for enterprise cloud expansion beyond the web front-end model.  The notion of an event front-end is similar in that both events and user GUI needs are external units of work that require an intermediary layer of functionality before they get committed to core business applications.  You don’t want your order entry system serving web pages, only processing orders.  Similarly, an event-driven system is likely to have to do something quickly to address the event, then hand off some work to the traditional application processes.
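
As a sketch of that division of labor (all names here are hypothetical, and the queue is a stand-in for whatever message bus actually fronts the back-office systems), an event front-end might react immediately and queue the heavier work for the traditional back end:

```python
import json
import queue
import time

# Stand-in for a real message bus feeding the back-office applications.
back_office_queue: "queue.Queue[dict]" = queue.Queue()

def trip_local_alarm(event: dict) -> None:
    print(f"ALARM: {json.dumps(event)}")

def handle_sensor_event(event: dict) -> dict:
    """Event front-end: do the fast, local thing, then hand off.

    Like a web front-end shielding the order entry system from HTTP,
    this shields the core application from raw event traffic.
    """
    if event.get("severity") == "critical":
        # The time-sensitive reaction happens here, at the front end.
        trip_local_alarm(event)
    # Everything else becomes ordinary work for the back end.
    back_office_queue.put({"task": "log_and_analyze", "event": event})
    return {"ack": True, "ts": time.time()}

handle_sensor_event({"sensor": "valve-7", "severity": "critical"})
```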

I doubt that even Google, certainly geeky enough for all practical purposes, thinks that microservice programming or Lambda programming or any other programming technique is going to suddenly sweep the enterprise into being a consumer of Cloud Functions.  I don’t think they believe there’s a runaway revenue opportunity in converting web front-ends to Cloud Functions either (though obviously user-generated HTTP interactions can be characterized as “events”).  What is driving this instead is a realization that there’s a big onrushing trend that has been totally misunderstood, and whose realization will drive a lot of cloud computing revenue.  That trend is IoT.

The notion that IoT is just about putting a bunch of sensors and controllers on the Internet is (as I’ve said many times) transcendentally stupid even for our hype-driven, insight-starved market.  What all technology advances for IT are about is reaping some business benefit, which means processing business tasks more effectively.  Computing has moved through stages in supporting productivity gains (three past ones, to be exact), and in each the result was moving computing closer to the worker.  Moving computing to process business events moves computing not only closer to workers, but in many cases ahead of them.  You don’t wait for a request from a worker to do something, you do it in response to the event stimulus that would (in the past) have triggered worker intervention.  Think of it as “functional robotics”; you don’t build robots to displace humans, you simply replace the human as the key element in event processing.

This approach, if taken, would offer cloud providers a chance to get into the driver’s seat on the next wave of productivity enhancement, an activity that would generate incremental business benefits (improved productivity) and thus create new IT spending rather than displacing existing spending.  That would be an easier sell: politically, because there’s no IT pushback caused by loss of influence or jobs, and financially, because unlocking improved business operations has more long-term financial value than cutting spending for a year or so.

Event processing demands edge hosting.  Functional programming is most effective as an edge-computing tool, because the closer you get to the edge of the network in any event-driven system, the sparser the events to process are likely to be.  You can’t put a VM everywhere you think you might eventually find an event.  Amazon recognized that with Greengrass, a way of pushing function hosting outside the cloud.  I think Google recognizes it too, but remember that Google has edge cache points already and could readily develop more.  I think Google’s cloud will be more distributed than either Amazon’s or Microsoft’s, because Google has designed its network from the first to support edge-distributed functionality.  Its competitors focused on central economies of scale.

The functional/event dynamic is what should be motivating the network operators.  Telcos have a lot of edge real estate to play with in hosting stuff.  The trick has been getting something going that would (in the minds of the CFOs) justify the decision to start building out.  The traditional assumption was that something like NFV would generate the necessary stimulus, but NFV didn’t develop fast enough or in the right way.  Now 5G is somehow supposed to do the job, but there is really no clear, broad edge-hosting mandate in 5G as it exists, and in any case we could well be five years away from meaningful specs in that area.

Amazon, Google, and Microsoft think that edge hosting of functions for event processing is already worth going after, and they probably see IoT as the driver.  Operators like IoT, but for the short-sighted reason that they think (largely incorrectly) it’s going to generate zillions of new customers by making machines into 4G/5G consumers.  They should like it for carrier cloud, and what we’re seeing from Google is a clear warning that operators are inviting another wave of disintermediation by being so passive on the event opportunity.

Passivity might seem to be in order if all the big cloud giants are already pushing Lambdas.  But despite their interest, the challenges of event processing through functions/microservices/Lambdas have not all been resolved.  Stateless processes are fine, but events are only half the picture of event handling; the “state” in state/event descriptions is the other half, and it can’t live within the functional processes themselves.  We need to somehow bring state, meaning context, to event handling, and that’s something that operators (and the vendors who support them) could still do right, and first.

State/event processing is a long-standing way of making sense of a sequence of events that can’t be properly interpreted without context.  If you just disabled something deliberately, sensors that record its state can be expected to report a problem, and you can safely ignore them.  If instead you’re counting on that something to control a critical process, the same report is definitely not a good thing.  Same event, different reactions, depending on context.  Since Lambdas are stateless, they can’t be the thing that maintains state.  What does?  This is the big, perhaps paramount, question for event processing in the future.  We need distributed state/event processing if we expect to distribute Lambdas to the edge.
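
Here’s a minimal sketch of the point, with the state table held outside the stateless function.  A plain dict stands in for whatever distributed state store eventually fills the role; the device names and states are invented for illustration:

```python
# The stateless function can't remember context, so context lives in
# an external store.  A dict stands in for that store here.
state_store = {"valve-7": "MAINTENANCE"}  # we deliberately disabled it

# State/event table: (current_state, event) -> (action, next_state)
TRANSITIONS = {
    ("MAINTENANCE", "fault_reported"): ("ignore", "MAINTENANCE"),
    ("ACTIVE", "fault_reported"): ("page_operator", "FAULT"),
    ("FAULT", "fault_cleared"): ("log_recovery", "ACTIVE"),
}

def handle_event(device: str, event: str) -> str:
    """Stateless handler: fetch context, consult the table, store context."""
    current = state_store.get(device, "ACTIVE")
    action, next_state = TRANSITIONS.get(
        (current, event), ("log_unexpected", current)
    )
    state_store[device] = next_state
    return action

# Same event, different reactions, depending on context:
print(handle_event("valve-7", "fault_reported"))  # -> ignore
print(handle_event("valve-9", "fault_reported"))  # -> page_operator
```

The open question in the paragraph above is exactly what replaces that dict when the functions themselves are scattered across the edge.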

I didn’t exaggerate the importance of the Lambda-and-event paradigm in my past blogs.  I’m not exaggerating it now, and I think Google just proved that.  There aren’t going to be any more opportunities for operators to reap IoT and edge-hosting benefits once the current one passes.  This is evolution in action: a shift from predictable workflows to dynamic event-driven systems, and from a connecting economy to a hosting economy.  Evolution doesn’t back up, and both operators and vendors need to remember that.