I said in a comment on an earlier blog that I thought all the IoT approaches touted so far were irrational. In earlier blogs I’ve noted my view that IoT had to be viewed more as a big-data application than as a network. A few of you have asked me to expand on my own view of IoT, and so that’s what I propose to do here.
“Classic” IoT is a vague model in which sensors and controllers of various kinds are connected directly to the Internet. Once there, they’re available to fuel a whole series of new applications. For proponents of this vision, the question is how we support LTE or WiFi interfaces to all these gadgets. There are a lot of issues with this model, from a public policy perspective, from an ROI perspective, and from a simple technology-ecosystem perspective. We have to start any IoT discussion by addressing those issues.
In policy terms, it’s clear that just putting a bunch of sensors and controllers on the Internet would create a massive challenge in security and privacy. Imagine how much fun hackers would have diddling with the traffic lights in New York, shutting down lanes on bridges and tunnels, and perhaps even impacting pipeline valves and the power grid. Imagine how much easier it would be for stalkers (or worse) to track prospective victims by looking at the security and traffic cameras. Happy fishbowl, everyone. Obviously there’s no chance this could be allowed.
Perhaps, then, it’s fortunate that it’s far from clear who’d have an interest in deploying this stuff. State and local governments in many areas have found they can’t even get permission or funding to set up traffic cameras. Public utilities already have sensor and controller connectivity, but it’s shielded from the very open environment IoT proposes to foster, and they’d hardly be looking to magnify their vulnerability. Private companies would look at the IoT model and ask how they could possibly earn a return by publishing data or allowing control openly.
The technical challenges fall into two groups, one relating to both policy and ROI and the other to utility. On the policy/ROI side, the problem is that the more sophisticated you make a sensor or controller, the more it costs and the more power it needs. If you have a home security system you probably use inexpensive wired sensors for your doors and windows, and maybe for motion detection. These probably cost about twenty bucks a pop. Now imagine an IoT world where each of those sensors is online through WiFi or LTE, and each has to be equipped with a firewall and network-based DDoS protection to fend off attack. You’re probably looking at five times the cost, plus you’ll either have to power the stuff or change batteries a lot.
The utility issue arises from the fact that in the classic IoT model a given sensor is just an IP address. How is that sensor put into a useful context? For example, if it’s a traffic sensor, what road and milepost is it located at, and what format is its information in? Is it counting cars, measuring speeds, or both? How do we know it’s even what it purports to be? It might be spoofed by some hacker, or presented by an enterprising rest-stop owner who wants to divert traffic by making an expressway look jammed.
In my opinion, IoT isn’t a movement at the network level, but rather an architecture built around a big-data model. Imagine a database where information from known and authenticated contributors is collected and structured. The contributions could include traffic sensors, home sensors, even the locations of mobile devices, and all of it would be contributed under policy-defined limits on use. Those who wanted the data would issue a big-data query that would be policy-validated to ensure it met security and privacy rules, and they’d receive what they needed, historically or in real time. Control elements would be represented by write-enabled variables, and accessing them would also be policy-controlled.
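To make the shape of this concrete, here’s a minimal sketch of what a policy-validated repository might look like. Every name here (the classes, the policy fields, the checks) is invented for illustration, not a reference to any real IoT product or API.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

@dataclass
class Policy:
    """Contributor-defined limits on how a data element may be used."""
    allowed_purposes: List[str]          # e.g. ["traffic-analytics"]
    allow_realtime: bool = False         # may consumers see live values?
    allow_control: bool = False          # is the element write-enabled?

@dataclass
class Element:
    """One sensor or controller as represented in the repository."""
    element_id: str
    kind: str                            # e.g. "traffic-count"
    location: str                        # e.g. "I-95 milepost 47"
    policy: Policy
    value: float = 0.0
    last_update: datetime = field(default_factory=datetime.utcnow)

class Repository:
    def __init__(self):
        self.elements: Dict[str, Element] = {}

    def contribute(self, element: Element):
        """Contributors register elements along with their usage policy."""
        self.elements[element.element_id] = element

    def query(self, kind: str, purpose: str) -> List[Element]:
        """Policy-validated read: only return elements whose policy
        permits the consumer's stated purpose."""
        return [e for e in self.elements.values()
                if e.kind == kind and purpose in e.policy.allowed_purposes]

    def actuate(self, element_id: str, new_value: float, purpose: str):
        """Policy-controlled write to a write-enabled control element."""
        e = self.elements[element_id]
        if not (e.policy.allow_control and purpose in e.policy.allowed_purposes):
            raise PermissionError("policy does not permit control access")
        e.value = new_value
        e.last_update = datetime.utcnow()
```

The point of the sketch is simply that access control lives in the repository, attached to each contribution, rather than in the sensors themselves.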
Where’s the network? Behind the database. Any owner of sensors could contribute information into the big-data repository, but they would control the contribution and be able to state policies on how their data could be used. The “network” connecting their sensors could be anything that’s convenient, meaning that sensors and controllers already networked with any current protocol could be admitted to the IoT repository through a gateway. There’s no need to make sensors directly visible online, or to change sensor technology to support direct Internet visibility.
This sort of IoT could be visualized as a collection of “device subnets”, each using any suitable technology to attach sensors and controllers. Each would have a gateway through which its data was pumped into the IoT repository, and the gateway would manage the policies and formatting. The IoT repository itself would be an online database query service, in effect a web service. It might be linked to a company VPN, to a cloud application set, or made available on the Internet.
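As a rough illustration, a device-subnet gateway in this model might look something like the following: it talks to its sensors over whatever local protocol they already use, normalizes the readings, attaches the owner’s policy and context, and posts the result to a repository web service. The endpoint URL, the payload fields, and the normalization step are all assumptions made for the sake of the sketch.

```python
import json
import urllib.request

# Hypothetical repository endpoint; a real deployment would use whatever
# web-service interface the repository operator actually publishes.
REPOSITORY_URL = "https://iot-repository.example.com/contribute"

def read_local_sensors():
    """Stand-in for the proprietary device subnet: in practice this could be
    Zigbee, a serial loop, or any existing sensor protocol behind the gateway."""
    return [{"raw_id": 7, "counts_per_min": 42}]

def normalize(raw):
    """Translate a raw reading into the repository's contribution format,
    attaching the context and usage policy the raw device lacks."""
    return {
        "element_id": f"acme-traffic-{raw['raw_id']}",
        "kind": "traffic-count",
        "location": "I-95 milepost 47",
        "value": raw["counts_per_min"],
        "policy": {"allowed_purposes": ["traffic-analytics"],
                   "allow_realtime": True},
    }

def push(reading):
    """Post one normalized reading to the repository web service."""
    req = urllib.request.Request(
        REPOSITORY_URL,
        data=json.dumps(reading).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    for raw in read_local_sensors():
        push(normalize(raw))
```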
You can probably see the similarities between this model and the web. Anyone can put up a website; anyone could “put up” a device subnet directly, or contribute to one of any number of IoT repositories subject to their policies. Anyone could access what’s put up, subject to whatever policy limits the owner imposes. The commercial terms of any of these relationships could be whatever the market sets.
IMHO, it would be the IoT repositories that would establish the value of the whole picture. Any cloud provider could establish one, of course, including Amazon, Apple, Google, and Microsoft. Interconnect players like Equinix could build them, and network operators could as well. For some of the players like Amazon, Apple and Google, you could see their repository exploiting the mobile devices they offer (directly or as a platform). Auto manufacturers could join somebody’s repository or start their own. Same with home security companies, federal, state, and local government, and even public utilities.
What about standards? Well, if we presume the IoT repository model, and if we presume that we’re accessing primarily devices with large installed populations, standards shouldn’t really be much of an issue. Unlike a device interface, a query can return data in any format that’s convenient.
This model is also easily federated. We have hotel and airline sites today, discount travel sites that create front-ends to them, and even a couple that front-end the front-ends. We could build gateways between IoT repositories, or higher-level repositories that cull specialized data from others or perform specialized analytics. Think cottage industry.
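Here’s a hedged sketch of what such a federating front-end might look like: a higher-level repository that fans a query out to several underlying repositories and merges whatever each one’s policies allow it to return. The repository names and the query interface are invented for illustration only.

```python
from typing import Callable, Dict, List

# Each underlying repository is represented by a simple query callable; in
# practice these would be web-service clients for independent operators.
Repo = Callable[[str, str], List[Dict]]

def make_repo(name: str, data: List[Dict]) -> Repo:
    """Toy stand-in for one operator's repository."""
    def query(kind: str, purpose: str) -> List[Dict]:
        # A real repository would also validate the purpose against policy.
        return [dict(d, source=name) for d in data if d["kind"] == kind]
    return query

def federated_query(repos: List[Repo], kind: str, purpose: str) -> List[Dict]:
    """Fan the query out and merge the results; each repository applies its
    own policies, so the front-end only sees what contributors allowed."""
    results: List[Dict] = []
    for repo in repos:
        results.extend(repo(kind, purpose))
    return results

if __name__ == "__main__":
    city = make_repo("city-traffic", [{"kind": "traffic-count", "value": 42}])
    telco = make_repo("telco-mobility", [{"kind": "traffic-count", "value": 37}])
    print(federated_query([city, telco], "traffic-count", "traffic-analytics"))
```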
One of the most interesting points about this model is that it enables what might be called a “utility IoT” approach: a company deploys a bunch of sensors and controllers and pays for the effort by 1) contributing the data to repositories and/or 2) developing and deploying its own IoT repository and charging for access. Doing this would be easier for telcos and public utilities, which have historically accepted low internal rates of return and tolerated high first costs, but in theory any player could bootstrap into it.
This isn’t classic IoT; it’s not a universe where new OTTs mine sensors that somehow magically appear, magically create ROI, and magically generate traffic and equipment revenues. But it’s somewhere I think we could actually get to, and that seems a better approach to me.