Is Amazon Going to Give Us Real Robots?

Amazon’s decision to buy iRobot, the company that makes the popular Roomba robotic vacuum, has stirred a lot of speculation, ranging from solid business questions to the predictable claim that this is just the latest round in the effort to have robots take over the world. Are robots poised to overtake AI as the center of our fears of sentient technology, or (gasp!) might they mate and create something truly awful?

Most of you have probably seen commercials that link Alexa command capability to Roomba technology. Those commercials point to the most rational explanation for the deal. The “smart home” is getting smarter not only in terms of what specific gadgets give it smarts, but in its ability to interact with us and accept instructions. Think “the Swarm” and you get an idea of why some find this a bit disquieting. There are surely risks, but also benefits.

Not the least of those benefits being financial. Companies like Amazon recognize that the classic “whole is greater than the sum of its parts” rule applies particularly well to smart homes and other facilities. The value of Alexa is limited if all you can do is turn on a light or play some music. Add to the capabilities and you add to the value, and the combined value grows at an even faster rate. Imagine an intruder, detected by Ring security and confronted by a charging Roomba. Well, maybe we need to think deeper than that.

Household robotics has interesting potential. Vacuums were actually low-hanging fruit, though; the devices don’t need “intelligence” as much as what in animals we’d call synapses. If you’re about to run into something, stop or turn to avoid it. Vacuums can rely on random movement to cover an area, but they could also be made to work with a simple map, either pre-installed or derived from past movement. If we want home robots to go anywhere beyond that, we have to advance what they do and how they understand the facility where they’re working.
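To make the “synapse” point concrete, here is a minimal sketch of that reflex-level logic in Python. The Drive class and the bumper flag are hypothetical stand-ins for whatever interfaces a real vacuum exposes:

```python
import random

class Drive:
    """Hypothetical stand-in for a vacuum's motor interface."""
    def stop(self):      print("stop")
    def forward(self):   print("forward")
    def turn(self, deg): print(f"turn {deg:.0f} degrees")

def reflex_step(drive: Drive, bumper_hit: bool) -> None:
    # Synapse-level behavior: no map, no plan, just react.
    if bumper_hit:
        drive.stop()                         # don't push into the obstacle
        drive.turn(random.uniform(90, 180))  # pick a new heading at random
    else:
        drive.forward()                      # random-walk coverage

# One tick of the loop: the robot has just hit something.
reflex_step(Drive(), bumper_hit=True)
```

Note there’s nothing here you’d dignify with the word “intelligence”; that’s exactly why vacuums came first.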

You could advance from vacuuming to a broader cleaning mission, which might mean extending current device capabilities to recognize different floor types and adapt to the current one, perhaps even extending to mopping or floor dusting. You could include a broader dusting mission for a taller, true robot, provided that you could “teach” it to avoid knocking things over. You could make a robot bring you something if you could add object identification. All these applications demonstrate that there are three elements to a broader robotic device: mechanical manipulation and movement (M3), situational awareness, and mission awareness. All three would require augmentation from the state-of-the-vacuum baseline, and that creates a number of questions and challenges.
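One way to picture that three-way split (my own sketch, not anything Amazon or iRobot has described) is as three separate interfaces a broader robot would have to implement:

```python
from typing import Protocol

class M3(Protocol):
    """Mechanical manipulation and movement: the only layer touching motors."""
    def move_to(self, x: float, y: float) -> None: ...
    def grasp(self, object_id: str) -> None: ...

class SituationalAwareness(Protocol):
    """What is around me right now: floor type, obstacles, pets, people."""
    def floor_type(self) -> str: ...
    def obstacles_near(self) -> list[str]: ...

class MissionAwareness(Protocol):
    """What I am trying to accomplish: vacuum, dust, fetch."""
    def next_goal(self) -> str: ...
```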

One question is how to distribute the intelligence. I’ve done a little work in robotics, and my conclusion was that the M3 features had to be largely in-device. First and foremost, a robot has to be able to avoid doing something destructive simply because it’s moving. This is also a fair robotic analog to animal/human reflexes. You don’t think about dodging something thrown at you; you simply do it. But situational awareness and mission awareness are things that could well be ceded to some higher-level element.
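Here is what that split might look like in practice, assuming a hypothetical controller API: the reflex check runs on the device at every tick, while the slower awareness layers live elsewhere and are consulted only when a new goal is needed.

```python
import time

class Bumper:
    """Hypothetical contact sensor; fires on the third tick for this demo."""
    def __init__(self): self.tick = 0
    def hit(self) -> bool:
        self.tick += 1
        return self.tick == 3

class Controller:
    """Hypothetical higher-level element holding mission awareness."""
    def next_goal(self, timeout: float) -> str:
        return "kitchen"   # in reality, a network call that may be slow

class Drive:
    def stop(self): print("stop")
    def turn(self, deg): print(f"turn {deg}")
    def toward(self, goal): print(f"toward {goal}")

def device_loop(drive, bumper, controller, ticks=5):
    """Runs on the robot: reflexes are local, planning is ceded upstream."""
    goal = None
    for _ in range(ticks):
        if bumper.hit():           # reflex: never waits on the network
            drive.stop()
            drive.turn(120)
        else:
            if goal is None:       # slow path; fine to block here
                goal = controller.next_goal(timeout=0.5)
            drive.toward(goal)
        time.sleep(0.02)           # roughly a 50 Hz reflex loop

device_loop(Drive(), Bumper(), Controller())
```

The design point is latency: a slow or lost connection to the higher-level element should only ever stall planning, never the reflex.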

Amazon, with Ring security devices, has some of that higher-level stuff already. It makes sense to assume that a smart robotic home would have a home controller, and that the controller would likely be an outgrowth of current smart-home-control elements. Not only would that leverage current technology, it would also facilitate cross-actions of the type we are already used to in home control. Your speakers can control your lights directly, or they can control a home controller. In either case, you’re using cloud-resident voice recognition and command generation.

One technology shift that seems critically important here is the transition from “ordinary” sensors that simply detect objects (ultrasonics/radar) to ones that can actually analyze a visual field and interpret it. Amazon already does this on its Ring devices, which can pick out humans with a fairly high level of reliability, and Google Lens can identify a broad range of things. My presumption is that Amazon will likely advance robotics in this area first, adding video analysis to robotic devices by using augmented Ring technology. That would let a robot map a space in three dimensions, identify pets and people, and even recognize specific people.
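As a rough illustration of the difference (using off-the-shelf OpenCV, not anything Ring-specific), even a classical person detector turns a raw frame into “there are humans here” rather than just “there is an object 40 cm away”:

```python
import cv2

# Classical HOG+SVM pedestrian detector; Ring-class systems would use a
# neural network, but the shape of the problem is the same: frame in,
# labeled regions out. "room.jpg" is a placeholder for a camera frame.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("room.jpg")
boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))

for (x, y, w, h) in boxes:
    print(f"person at ({x},{y}), size {w}x{h}")
```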

The big question is less one of technology direction than of near-term application. Moving from a robot vacuum to anything else is a major step. To expand cleaning duties meaningfully you’d need to give a robot the ability to move up and down at least a few stairs. To do dusting or fetch something, the robot would have to be a lot taller than a Roomba. Any of this means that the consumer price would surely rise significantly, which would reduce the addressable market.

Of course, Amazon may decide to take robots more in a business direction. Its own warehouses and shipping operations would be a fairly logical place to use enhanced robotics, and it could leverage its own experience into sales to other businesses, eventually moving into the consumer space when costs could be kept manageable. But then what would be the value of acquiring iRobot?

The big barrier to all of this is safety. Remember the chess-playing robot that broke a boy’s finger? It’s doubtful that Amazon or any US company would release a product that could make that mistake, but one thing the incident shows is that quick movement by a person, particularly movement that doesn’t fit an expected pattern, can throw off robot intelligence. The old Three Laws of Robotics may be logical and seem to cover all the bases, but they presuppose robots with almost-or-completely-human intelligence, which early devices surely will not have.

The “robot” in the chess match was really an industrial robotic arm, and behind it was an AI/computer-based chess program. Some stories on the chess-robot incident linked the technology to the use of AI to “play” chess, but the difference is that AI chess doesn’t attempt to link the computer directly to moving pieces on the board. When AI or robotic technology crosses over into manipulating things in the real world, there are going to be issues with preventing those manipulations from damaging property or hurting people or animals. Teaching AI to play chess doesn’t teach it to safely move pieces around when the opponent is one of those disorderly biological systems we call a “human”.

It really goes back to M3 and the other two of our three elements. AI chess focuses on mission awareness. Mechanical manipulation and movement have to focus on the interplay between a robot element and the real world, and it’s situational awareness that has to provide the brake on how M3 and mission awareness combine. That’s what is going to have to be transformed if robots are going to do more than crawl around our floors, and Amazon may be signaling that it intends to help robots to their feet and integrate them into our lives. Otherwise, the deal makes little sense.
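In code terms, that brake is a gate sitting between the mission layer that proposes an action and the M3 layer that executes it. A minimal sketch, with the veto conditions obviously hypothetical:

```python
class Situation:
    """Hypothetical situational-awareness feed."""
    def human_in_path(self, action: str) -> bool:
        return action == "swing_arm"   # e.g., a hand is over the board
    def fragile_nearby(self, action: str) -> bool:
        return False

class M3:
    """Hypothetical mechanical layer."""
    def halt(self): print("halt")
    def apply(self, action: str): print(f"doing {action}")

def execute(action: str, situation: Situation, m3: M3) -> bool:
    # Mission awareness proposes, situational awareness vets, M3 acts.
    if situation.human_in_path(action) or situation.fragile_nearby(action):
        m3.halt()          # the brake: refuse rather than risk harm
        return False
    m3.apply(action)
    return True

execute("swing_arm", Situation(), M3())   # vetoed: a hand is in the way
```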