Google Enters the Cloud IoT Space–Tentatively

Google has now followed Amazon and Microsoft (Azure) in deploying cloud tools for IoT.  In many ways, the Google announcement is a disappointment to me, because it doubles down on the fundamental mistake of thinking “IoT” is just about getting “things” on the “Internet.”  But if you look at the trend in what I call “foundation services” from the various cloud providers, we might be sneaking up on a useful solution.

IoT is at the intersection of two waves.  One, the obvious one, is the hype wave around the notion that the future belongs to hosts of sensors and controllers placed directly on the Internet, driving a whole new OTT industry to find exploitable value in them.  The other, the more important one, is the trend of adding web services to IaaS cloud computing to build what is in effect a composable PaaS that lets developers build cloud-specific applications.  These are what I’ve called “foundation services”.

Cloud providers like Amazon, Microsoft, and (now) Google have bought into both waves.  You can get a couple dozen foundation services from each of the big IaaS players, and these include the same kind of pedestrian, device-management-centric solutions for IoT.  Network operators like Verizon that have IoT developer programs have focused on the same point.  The reason I’m so scornful of this approach is that you don’t need to manage vast hordes of Internet-connected public sensors unless you can convince somebody to deploy them.  That would demand a pretty significant revenue stream, which is difficult to harmonize with the view that all these sensors are free for anyone to exploit.

The interesting thing is that for the cloud providers, a device-centric IoT story could be combined with other foundation services to build a really sensible cloud IoT model.  The providers don’t talk about this, but the capability is expanding literally every year, and at some point it could reach a critical mass that could drive an effective IoT story.

If you look at IoT applications, they fall into two broad categories—process control and analytic.  Process control IoT is intended to use sensor data to guide real-time activity, and analytic IoT drives applications that don’t require real-time data.  You can see a simple vehicular example of the difference in self-drive cars (real-time) versus best-route-finding (analytic) as applications of IoT.

What’s nice about this example is that the same sensors (traffic sensors) might be used to support both types of applications.  In a simplistic view of IoT, you might imagine the two applications each hitting the sensors for data, but remember that there could be millions of vehicles and thus millions of hits per second.  It would never work.  What you have to assume instead is that sensor data would be “incoming” at some short interval, fueling both applications in aggregate, and each application would then trigger the user processes that need the information.
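To make that concrete, here’s a minimal sketch (in Python, with invented names throughout) of the aggregate-and-fan-out pattern: each sensor is read once per interval into a shared collection point, and every application consumes the resulting snapshot rather than hitting the sensors itself.

```python
from collections import defaultdict

# Hypothetical illustration: sensors publish on a short interval into one
# shared aggregation layer; applications subscribe to the aggregate rather
# than polling sensors directly.

class SensorAggregator:
    """Collects readings once per interval and fans them out to subscribers."""

    def __init__(self):
        self.subscribers = []            # application callbacks
        self.window = defaultdict(list)  # sensor_id -> readings this interval

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def ingest(self, sensor_id, value):
        # Each sensor is read once per interval, regardless of how many
        # applications ultimately consume the data.
        self.window[sensor_id].append(value)

    def flush(self):
        # At the end of the interval, publish the aggregate to every
        # application; the sensors see one "hit" per interval, not millions.
        snapshot = dict(self.window)
        self.window.clear()
        for callback in self.subscribers:
            callback(snapshot)

def realtime_app(snapshot):
    print("real-time control sees", len(snapshot), "sensors")

def analytic_app(snapshot):
    print("analytics archives", sum(len(v) for v in snapshot.values()), "readings")

agg = SensorAggregator()
agg.subscribe(realtime_app)
agg.subscribe(analytic_app)
agg.ingest("traffic-cam-41", 27.0)
agg.ingest("traffic-cam-42", 31.5)
agg.flush()
```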

This kind of model is already supported by cloud providers, not as anything they’d call IoT, but through services like Amazon’s Kinesis, which can pass sensor information through complex event processing and analysis, or spawn other streams that represent individual applications or needs.  You can then combine this with something like Amazon’s Data Pipeline to create complex work/storage/process flows.  The same sort of thing is available in Azure.
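As a rough illustration of the ingestion side, here’s how sensor readings might be pushed into a Kinesis stream using boto3.  The stream name and record format are my own assumptions, not anything Amazon prescribes for IoT; the point is that downstream consumers (event processing, analytics, derived streams) read from the stream rather than from the sensors.

```python
import json
import boto3  # AWS SDK for Python

kinesis = boto3.client("kinesis", region_name="us-east-1")

def publish_reading(sensor_id: str, speed_mph: float) -> None:
    """Push one sensor reading into a Kinesis stream for downstream consumers."""
    kinesis.put_record(
        StreamName="traffic-sensors",  # hypothetical stream name
        Data=json.dumps({"sensor": sensor_id, "speed": speed_mph}).encode(),
        PartitionKey=sensor_id,        # shards readings by sensor
    )

publish_reading("traffic-cam-41", 27.0)
```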

You could call the foundation services here “first-level” foundation services in that they are basic functions, not specific to an application or even an application model.  You can also easily imagine Microsoft and Amazon taking these first-level services and building them into a second-level set.  For example, they could define a set of collector processes linked to device registration, and then link the flows from these collectors to both real-time correlation and analytic storage and big data.  There would be API “hooks” here to allow users to introduce the processing they want to invoke in each of these areas.
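Nobody offers exactly this today, so the following is purely a speculative sketch of what such a second-level service’s hooks might look like: a collector bound to device registration, with pluggable real-time and analytic processing.  All names are illustrative.

```python
from typing import Callable, Dict, List

class Collector:
    """Speculative second-level service: registration plus pluggable hooks."""

    def __init__(self):
        self.devices: Dict[str, dict] = {}
        self.realtime_hooks: List[Callable] = []
        self.analytic_hooks: List[Callable] = []

    def register_device(self, device_id: str, metadata: dict) -> None:
        # First-level device management, folded into the collector.
        self.devices[device_id] = metadata

    def on_realtime(self, hook: Callable) -> None:
        self.realtime_hooks.append(hook)   # e.g., event correlation

    def on_analytic(self, hook: Callable) -> None:
        self.analytic_hooks.append(hook)   # e.g., big-data storage

    def receive(self, device_id: str, reading: dict) -> None:
        if device_id not in self.devices:
            return                         # ignore unregistered devices
        for hook in self.realtime_hooks:
            hook(device_id, reading)
        for hook in self.analytic_hooks:
            hook(device_id, reading)

collector = Collector()
collector.register_device("traffic-cam-41", {"road": "I-95", "mile": 12})
collector.on_realtime(lambda d, r: print("correlate", d, r))
collector.on_analytic(lambda d, r: print("archive", d, r))
collector.receive("traffic-cam-41", {"speed": 27.0})
```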

These second-level services could in turn be composed into third-level services.  Traffic analysis for route optimization is an example: a GPS app could go to such a service to get traffic conditions and travel times for a very large area, and self-drive controllers could get local real-time information for what could be visualized as a “heads-up” display/analysis of nearby things and how they’re moving.
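From the consuming application’s point of view, a third-level service would look like an ordinary query API.  The endpoint and response schema below are entirely hypothetical; what matters is that the GPS app asks the service, never the sensors.

```python
import json
from urllib import parse, request

def get_travel_times(area: str) -> dict:
    """Query a hypothetical third-level traffic-conditions service."""
    query = parse.urlencode({"area": area})
    url = f"https://iot.example.com/v1/traffic/conditions?{query}"  # invented endpoint
    with request.urlopen(url) as resp:
        return json.load(resp)

# times = get_travel_times("boston-metro")
# print(times["routes"])  # assumed schema: per-route travel-time estimates
```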

The emergence of an OTT IoT business actually depends more on these services than on sensor management.  As I’ve already noted, you can’t have individual developers all building applications that would go out and hit public sensors; there’s no sensor technology short of a supercomputer that could handle the processing, and you’d need a gigabit trunk to the sensor to carry the traffic.  The reality is that we have to digest information from sensors in different ways to make the application practical and control sensor costs.

Why are we not seeing something logical here, then?  Why would Google be doing something that falls short of the mark, utility-wise?  The likely answer lies in how technology markets evolve.  We hear about something new, and we want to read or hear more.  That creates a media market that is far ahead of any realization—how far depends on the cost of adoption and the level to which an early credible business case can be defined.  During the media-market period, what’s important is whether an announcement gets press attention, and that relies most often on the announcement tracking the most popular trends, whether they’re likely to be realized or not.  We’ve seen this with NFV, with the cloud, and with most everything else.

Eventually, though, reality is what’s real.  You can only hype something until it’s clear that nothing useful will ever happen, or until the course the technology will really take becomes clear and drowns out the hype.  We’re already getting to that point with NFV and the cloud, and we’ll get there with IoT as well.

Speaking of IoT, and of real-time processing and workflows, all of this is going to end up shaping NFV as well.  IMHO, there is no arguing with the point that NFV success has to come in the form of NFV as an application of carrier cloud.  Carrier cloud is a subset of the cloud.  Right now we have an NFV standardization process that’s not really facing that particular truth.  IoT and real-time control are also applications of “carrier cloud” in the sense that they probably demand distributed cloud processing and mass sensor deployment, both of which operators would likely have to play a big role in.  If a real-time application set drives distributed cloud feature evolution, it could build a framework for software deployment and lifecycle management that would be more useful than anything NFV-specific.

I also believe that operator architectures like AT&T’s or Verizon’s are moving toward a carrier-cloud deployment more than a specific deployment of NFV.  If these architectures succeed quickly, they’ll outpace the evolution of the formal NFV specifications (which are in any event much narrower) and will then drive the market.  Operators have an opportunity, with carrier cloud, to gain edge-cloud or “fog computing” supremacy, since it’s unlikely Amazon, Google, or Microsoft would deploy as far out as a central office.  If, of course, the operators take some action.

They might.  If Amazon and Microsoft and Google are really starting to assemble the pieces of a realistic IoT cloud framework, it’s the biggest news in the whole cloud market, and in network transformation as well.  Operators who don’t want to be disintermediated yet again will have to think seriously about how to respond, and they already admit that OTTs respond faster to market opportunities than they do.  It would be ironic if the operators were beaten by the OTTs in deploying modernized examples of the very technologies that are designed to make operators more responsive to markets!  IoT could be their last chance to get on top (literally!) of a new technology wave.