How can we best apply the notion of virtualization to IoT? That’s a question that more and more operators and vendors are wrestling with, and it’s a good one. The answer might be interesting and disruptive—think less about virtualizing the network and more about virtualizing the “things”, the sensors and controllers.
I’m not denying that there are “things” that we might want to access that aren’t currently connected. I’m not denying that 4G/5G might be a useful way to connect some of them, but I think everything we already know about security and environmental monitoring and process automation proves that connection isn’t really the issue. You’re probably sitting in the midst of a “thing network” as you read this blog, and it’s based on pedestrian sensor/controller technology that doesn’t put any of the “things” directly on the Internet, or on a 4G network either.
So is the whole IoT thing a colossal media/analyst fraud? Maybe, but there is still a grain of value in the notion if you look beyond the aspirations of vendors and operators for easy money. The question is how to give the market at large access to the knowledge held by what are now (and are likely to remain) “private things”, devices that are neither online nor accessible in any form to general application development.
I’ve talked about one model that could harness the things, so to speak. If we were to build a massive set of repositories that held the collected knowledge we extract from our things, we could then run queries/analytics that would let people exploit all this data and still (through the analytics apps) apply the necessary protections to ensure stability, security, and privacy. In this approach, IoT is a database presented by a series of APIs. I think this is a good approach, certainly better than just sticking all these sensors and controllers directly on the Internet and hoping everyone would behave.
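To make that concrete, here’s a minimal sketch (in Go) of what “IoT as a database” looks like from a developer’s side. The host name, path, and query parameters are all invented for illustration; the point is that you query the repository, and the analytics layer behind the API does the policy and privacy filtering, not the sensor.

```go
// Sketch: query a hypothetical thing-data repository instead of a sensor.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Ask the (hypothetical) analytics front-end for the average hallway
	// temperature over the last hour. Filtering and access control happen
	// behind this API, not at the device.
	resp, err := http.Get("https://repo.example.com/query?metric=temp&zone=hallway&window=1h&agg=avg")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```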
There’s another approach, though. Suppose that we want to preserve the literal “Internet of Things” model but recognize that not everything “on the Internet” has to be directly and discretely connected. We could then employ virtualization to create a series of “virtual things” that are constructed from, and related to, the real things. These could be presented on the Internet through traditional web-like APIs, but the real stuff that supports the virtual presence could be hidden, connected as it is now, and the APIs could still apply policy controls to protect the integrity of the data and the security of the users.
With this model, each “thing” is represented as though it were a kind of website; you could read and write to it and potentially even access it through a web browser. Like any “website”, it could be either on a VPN or on the open Internet, and it could apply encryption and access controls. In programming terms, it’s a resource accessed with a RESTful API.
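Here’s a rough sketch of what one of these virtual things could look like as code. Everything here is an assumption, the path scheme and reading format included (and the routing pattern needs Go 1.22 or later); the point is just that a “thing” becomes an ordinary REST resource you can GET and PUT.

```go
// Sketch: a virtual thing exposed as a RESTful resource.
package main

import (
	"encoding/json"
	"net/http"
	"sync"
)

var (
	mu       sync.RWMutex
	readings = map[string]float64{"temp-sensor-1": 21.5}
)

func thingHandler(w http.ResponseWriter, r *http.Request) {
	id := r.PathValue("id")
	switch r.Method {
	case http.MethodGet:
		mu.RLock()
		v, ok := readings[id]
		mu.RUnlock()
		if !ok {
			http.NotFound(w, r)
			return
		}
		json.NewEncoder(w).Encode(map[string]float64{"value": v})
	case http.MethodPut:
		// A real thingsite would authenticate the caller and apply policy
		// here before relaying anything to the hidden controller network.
		var body map[string]float64
		if err := json.NewDecoder(r.Body).Decode(&body); err != nil {
			http.Error(w, "bad request", http.StatusBadRequest)
			return
		}
		mu.Lock()
		readings[id] = body["value"]
		mu.Unlock()
		w.WriteHeader(http.StatusNoContent)
	default:
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
	}
}

func main() {
	http.HandleFunc("/things/{id}", thingHandler)
	http.ListenAndServe(":8080", nil)
}
```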
Behind each “thingsite” is a process that links it to the real sensor or controller, or set thereof. This process is similar to that used behind websites to link to transactional applications. In theory it could operate asynchronously, gathering data and posting it to the thingsite based on policy-determined timing, or it could be triggered by an inquiry to the thingsite. The process could also be doing database dips, meaning that the thingspace could be a front-end for the repository of thingdata I’ve been talking about.
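A minimal sketch of that back-end process might look like the following, assuming an invented sensor-read function and the thingsite endpoint from the sketch above. The ticker interval stands in for the policy-determined timing.

```go
// Sketch: the gateway process behind a thingsite, polling a private
// sensor and pushing readings to the virtual thing's REST endpoint.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

// readPrivateSensor stands in for whatever pedestrian sensor technology
// is actually deployed (Modbus, ZigBee, a serial line...).
func readPrivateSensor() float64 {
	return 21.5 // placeholder value
}

func main() {
	ticker := time.NewTicker(30 * time.Second) // policy-determined timing
	defer ticker.Stop()
	for range ticker.C {
		v := readPrivateSensor()
		body := bytes.NewBufferString(fmt.Sprintf(`{"value": %g}`, v))
		req, _ := http.NewRequest(http.MethodPut,
			"http://localhost:8080/things/temp-sensor-1", body)
		req.Header.Set("Content-Type", "application/json")
		if _, err := http.DefaultClient.Do(req); err != nil {
			fmt.Println("post failed:", err) // a retry policy would go here
		}
	}
}
```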
This model of IoT would preserve the notion of a set of on-the-web sensors and controllers that could be exploited, but it would buffer them with the same kind of virtualization that currently keeps tenant networks separate. If your company wants to expose a set of sensors/controllers to partners, you simply define a thingspace for them and let the back-end technology populate it with the information you’re willing to share. They can do whatever they want with the exposed things, and you don’t have to coordinate with them as long as you’re happy with the data that’s being shared.
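A thingspace, in this picture, is really just a named policy boundary. Here’s a toy sketch of what defining one for a partner might amount to; every field name here is invented.

```go
// Sketch: a partner thingspace as a named collection of shareable things
// plus a filter that strips out whatever the contract doesn't cover.
package main

import "fmt"

type Reading struct {
	ThingID  string
	Value    float64
	Location string // internal detail we may not want to share
}

type Thingspace struct {
	Partner       string
	AllowedThings map[string]bool
	ShareLocation bool
}

// Filter applies the thingspace policy before a reading is exposed.
func (ts Thingspace) Filter(r Reading) (Reading, bool) {
	if !ts.AllowedThings[r.ThingID] {
		return Reading{}, false // not shared with this partner at all
	}
	if !ts.ShareLocation {
		r.Location = "" // redact what the agreement doesn't cover
	}
	return r, true
}

func main() {
	ts := Thingspace{
		Partner:       "acme-logistics",
		AllowedThings: map[string]bool{"dock-door-3": true},
	}
	if r, ok := ts.Filter(Reading{ThingID: "dock-door-3", Value: 1, Location: "building-B"}); ok {
		fmt.Printf("%+v\n", r)
	}
}
```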
“Public” things, meaning things that would be available for use without contractual arrangements, are also possible with the model; you simply expose a thingspace directly online and you apply only the policy filters that are required to conform to evolving privacy regulations. In theory you could even build in security and load-balancing with this model, spawning multiple virtual things that represent the same set of real ones to share the load of mass access.
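The load-sharing piece could be as simple as rotating requests across replica thingsites that all front the same real sensors. Here’s a trivial round-robin sketch with placeholder URLs:

```go
// Sketch: spread mass access across replica virtual things.
package main

import (
	"fmt"
	"sync/atomic"
)

var replicas = []string{
	"http://thing-a.example.com/things/temp-sensor-1",
	"http://thing-b.example.com/things/temp-sensor-1",
	"http://thing-c.example.com/things/temp-sensor-1",
}

var next uint64

// pick returns the next replica in round-robin order.
func pick() string {
	n := atomic.AddUint64(&next, 1)
	return replicas[n%uint64(len(replicas))]
}

func main() {
	for i := 0; i < 5; i++ {
		fmt.Println(pick())
	}
}
```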
Since the back-end applications that feed the thingsites would be able to gateway data from a private sensor/controller network based on any technology, you can immediately harness all the stuff that’s already deployed, or at least that part of the current base that its owners are prepared to open up. You could also construct, with the proper access to either things or thingsites deployed elsewhere, your own “derived thingsites” that represent analytics-based digestions of one or more sensors, or that introduce data from stuff outside the thingspaces—like retail pricing or personal presence.
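Here’s a toy example of such a derived thingsite: a value digested from several real sensors, combined with a piece of outside data. Both the readings and the price feed are invented for illustration.

```go
// Sketch: a derived thing whose value is an analytics-style digestion of
// several real sensors, mixed with a hypothetical external data source.
package main

import "fmt"

// average digests a set of sensor readings into a single derived value.
func average(readings []float64) float64 {
	sum := 0.0
	for _, r := range readings {
		sum += r
	}
	return sum / float64(len(readings))
}

func main() {
	floorTemps := []float64{20.8, 21.4, 22.1} // from three real sensors
	derived := average(floorTemps)

	// An external input the thingspaces don't contain, pulled in to make
	// the derived thing more useful (value and source are invented).
	energyPrice := 0.31 // $/kWh from a hypothetical pricing feed

	fmt.Printf("derived comfort-zone temp: %.1f, current price: $%.2f/kWh\n",
		derived, energyPrice)
}
```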
What about the original sensors-online model? Well, if you wanted to, you could augment virtual things with real ones, but I think that eventually somebody is going to get smart and realize that the cost of supporting a complete online presence, with policy and security filters, for every “thing” is going to kill the opportunity completely. A better approach would be to have the real things, even new ones, front-ended by virtual thingsites that could handle all the variables of security, policy, and performance.
So will this approach rise up and take over? Probably not, because so much of technology these days is about creating buzz rather than creating opportunity. What could happen, and I think will happen eventually, is that the real IoT opportunities will end up migrating to a practical platform, which could be the thingspace concept or the analytics model. Somebody who manages to figure this out up front could end up making some big bucks.