It’s Time We Got Serious About the IoT Model

No matter how stupid a notion is, it's never too stupid to be hyped.  The corollary is that proof of a notion's stupidity, even successive waves of further proof, won't drive the hype away.  We're seeing that now with the recent DDoS attack, launched from what would have to be called IoT devices.  You all know I've been saying our "everything-directly-on-the-Internet" vision of IoT is dumb.  Well, guess what?

There is no question that smart devices are exploding.  Nearly everything can be made to communicate these days.  There's also no question that if we could gather the collective information from our smart devices, and perhaps from "public smart devices," we could enrich the information context in which we operate.  This enrichment could lead to mobile applications that sense what we're trying to do and where we're trying to go, and to better integration of the movement of goods in the real world.

The question, then, isn't whether networking smart devices could be world-shifting, but whether the "Internet" of Things is the right approach.  It's possible that because we think of the Internet as the soul of our social world, we automatically think of it as the thing-network.  It's possible that VCs and OTT players would love to see somebody like the telcos fund a vast deployment of sensors that they could then exploit, just as they exploited Internet connectivity.  Anything is possible, except for the impossible, and that's what we're finding out.

There have always been three fundamental issues with IoT.  The first is cost; you have to have a revenue flow to all those who have to spend to make it happen, and that flow has to represent a reasonable return on their investment.  Issue number two is privacy; you have to be sure that you don't compromise privacy or even safety by giving people access to information they shouldn't have or could misuse, perhaps by combining it with other information.  Issue number three is security; you have to be sure that the "things" on IoT cannot be hacked or spoofed.

It should be clear that there’s a natural tension between the first of these points and the other two.  The more protection against misuse you build into something, the more expensive you make it.  Add this to the fact that something that’s directly connected to the Internet has to support a more complicated network paradigm than something on a local control protocol like ZigBee, and you quickly realize that you’ll either have to limit opportunity by building a robust “thing” or limit privacy/security.  That’s what has happened here.

If we expect a "thing" to have the same kind of in-device security and privacy protection as a smartphone or computer, then we're asking for something a lot more expensive than the market would likely bear, at least for broad adoption.  If we want mass-market cheapness, then we'd have to deal with security and privacy a different way.  The most logical one is to use an intermediary element as a gateway or proxy, representing the device on the Internet.  If that's done, then the "thing" itself could be invisible on the Internet, and online interaction with it could be mediated by a single element deployed once per installation and covering potentially a lot of our "things".  That would present a lower total cost of ownership.
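
To make the proxy model concrete, here's a minimal sketch, assuming a Python gateway and a couple of hypothetical local devices.  The gateway is the only element with an Internet-facing address; the "things" sit on a local control link and are visible online only through the URLs the gateway chooses to expose.

```python
# Minimal sketch of the proxy/gateway model: the gateway is the only
# element visible on the Internet; "things" are reached only through it.
# Device names, URL paths, and the local-link stub are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Local registry of "things" the gateway represents (illustrative devices).
LOCAL_DEVICES = {
    "thermostat": {"temp_c": 21},
    "porch-light": {"on": False},
}

class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Internet clients see only gateway URLs like /devices/thermostat;
        # the device itself has no Internet-visible address.
        name = self.path.strip("/").split("/")[-1]
        if name not in LOCAL_DEVICES:
            self.send_error(404, "unknown device")
            return
        body = json.dumps(LOCAL_DEVICES[name]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The gateway, not the "things", terminates Internet traffic.
    HTTPServer(("0.0.0.0", 8080), GatewayHandler).serve_forever()
```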

In my view, all three of our issues argue strongly against the notion of direct Internet connection of "things", but that doesn't mean that adopting my proxy model solves all the problems.  Obviously, you have to secure the proxy, and in two directions.  Most effective, accepted home-control systems today (which are not Internet-based but may offer a proxy connection) have both a device-registration function to keep unwanted devices off the network, and a proxy-protection function to keep the home control network secure.

Accepted device-registration strategies will often include either a specific discovery approach that's separate from functional acceptance, or an explicit registration process that requires you to physically manipulate the device in some way to make it receptive.  Often both are included, and if the process is well-designed then it's unlikely that unwanted "things" will invade you.
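
As an illustration of the second approach, here's a minimal sketch, assuming a Python-based gateway with a physical pairing button; the device IDs, timing, and function names are hypothetical.  The point is that being discovered isn't the same as being accepted: a device only gets a credential while the physically triggered window is open.

```python
# Sketch of a registration flow in which discovery alone is not acceptance:
# the gateway admits a device only while a short, physically triggered
# pairing window is open.  Names and timings are illustrative.
import time
import secrets

PAIRING_WINDOW_SECONDS = 60
_pairing_open_until = 0.0
_registered = {}          # device_id -> per-device shared secret

def press_pairing_button():
    """Called when someone physically presses the gateway's pairing button."""
    global _pairing_open_until
    _pairing_open_until = time.time() + PAIRING_WINDOW_SECONDS

def request_registration(device_id):
    """A discovered device asks to join; refused unless the window is open."""
    if time.time() > _pairing_open_until:
        return None                      # discovered, but not accepted
    secret = secrets.token_hex(16)       # credential for later authentication
    _registered[device_id] = secret
    return secret

def is_registered(device_id):
    return device_id in _registered
```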

Even that’s not the full story.  One of the knotty problems with IoT is how you’d update the software/firmware to avoid having your “things” turn into bricks because of some logic problem.  An update path that goes directly from a vendor website to a “thing” presents a major risk, even if updates have to be explicitly authorized.  By spoofing the vendor website, it would be possible to introduce a malware-infected update into the “thing”.
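
One hedge against a spoofed update source is to have the proxy verify every update against a vendor key pinned at installation time, before anything reaches the "thing".  The sketch below assumes a Python gateway, the third-party cryptography package, and an Ed25519-signed firmware image; the key value and function names are illustrative, not any particular vendor's mechanism.

```python
# Sketch: the gateway refuses to forward a firmware image to a "thing"
# unless the image verifies against a vendor public key pinned at install
# time, so a spoofed download site cannot push a malicious update.
# Assumes the third-party "cryptography" package and an Ed25519 signature.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Public key shipped/pinned with the gateway, never fetched over the network.
PINNED_VENDOR_KEY = bytes.fromhex(
    "0" * 64   # placeholder; a real deployment pins the vendor's actual key
)

def firmware_is_authentic(image: bytes, signature: bytes) -> bool:
    """Return True only if the image was signed by the pinned vendor key."""
    try:
        key = Ed25519PublicKey.from_public_bytes(PINNED_VENDOR_KEY)
        key.verify(signature, image)   # raises InvalidSignature on failure
        return True
    except (InvalidSignature, ValueError):
        return False
```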

Malware in the "things" is most risky where WiFi is used by the "thing", even if it's only to contact its proxy.  If the "thing" has an IP address on the home/business LAN then it could contact the outside world, including by generating DNS queries.  If the "thing" has no IP address then it can't send anything beyond its network without going through the proxy, which can be more heavily defended.
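
Here's a minimal sketch of that "everything goes through the proxy" idea, again in Python with hypothetical names: the gateway relays traffic only for registered devices and only to an explicit allow-list of destinations, so a compromised "thing" has no path of its own to arbitrary hosts or to the DNS.

```python
# Sketch of the "no direct path out" idea: the gateway forwards traffic
# only for registered devices and only to an allow-listed destination,
# so a compromised "thing" cannot reach arbitrary hosts or issue its own
# DNS queries.  Destination names are illustrative.
ALLOWED_DESTINATIONS = {
    "updates.example-vendor.com",
    "telemetry.example-vendor.com",
}

def forward(device_id, destination, payload, registered_devices):
    """Relay a message upstream only if both device and destination are known."""
    if device_id not in registered_devices:
        return False                     # unregistered things get nothing out
    if destination not in ALLOWED_DESTINATIONS:
        return False                     # no arbitrary hosts, no raw DNS
    # The actual relay (opened from the gateway's own address) is elided.
    return True
```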

Another caveat, though.  I used to give technical seminars, and on one of my slides I had the advice "There's No Substitute for Knowing What You're Doing."  IoT security, even in a proxy framework where the "things" have no IP address, can still be contaminated by people who fail to take the most basic precautions.  The classic example is failing to change the default password on a device, or never changing your own passwords.

The password point is particularly relevant given that at least some of the IoT bots involved in the latest DDoS attack were from a single vendor, who says that the big problem is customer password practices.  We still have servers today that are vulnerable because somebody forgot to delete a default account or an old account, or used a simple, static, obvious password.  That's with IT professionals involved.  In the consumer space, there's little hope of getting people to do the right thing.

That raises the question of whether, in a "thing-connected" world, we don't need a better form of protection.  Retinal scans, fingerprints, and other physically linked authentication steps are more secure overall because they don't rely on good behavior on the part of the device owners.  Unless we like DDoS attacks, it might be wise to think about requiring measures like this for IoT deployments before we get too far along to have any hope of catching up with the issues.

IoT is a great notion if it’s done right, and we now have clear evidence that it’s potentially a disaster if it’s done wrong.