The Shape of the Future Robot

There’s no question that AI is important. There shouldn’t be a question that robots are important too, and if anyone doubts it, Amazon’s long interest in robotics, its Astro proto-robot, its desire to acquire iRobot, and the rumors I’ve heard that Google, Microsoft, and Meta are all looking at robots should be proof enough.

Amazon’s Astro and the rumors I’ve heard about the other three vendors’ programs suggest that most home-robot interest focuses on a device rather than on what most of us would call a “robot”, meaning something anthropomorphic. In fact, most of the technical people I know would define the ultimate robot as the marriage of AI technology for smarts and a humanoid form.

The marriage of these two concepts, which is what Tesla proposes to do with Optimus, a humanoid robot, could be arresting, but as usual the qualifier “could” is the critical point. The original Optimus unveiling showed a robot that one of my friends who saw it described as “a patient recovering from a paralyzing injury”. Elon Musk said that Optimus would learn to walk and to behave with considerable autonomy, but perhaps three to five years out. Still, the mere possibility that we could actually have humanoid robots raises a lot of hopes, and a lot of hackles too.

A price tag of twenty thousand dollars means Optimus isn’t going to be a fixture in every household, at least not immediately. However, that’s less than the cost of the average car and far less than a Tesla, and people still buy them. Could we actually see millions of Optimi (I guess we have to figure out what the plural of “Optimus” would be) out there? If we did, what would the risk/reward balance look like? It depends on the degree of autonomy that Musk can actually achieve, and what others (perhaps Amazon) might do in response to Tesla’s moves.

We are a very long way from being able to create a robot that could actually match human behavioral and functional standards. However, as anyone who’s encountered animals in the wild knows, you don’t have to be human to be dangerous. A chess-playing robot, as I said in a prior blog, injured a boy it was playing against, and it clearly wasn’t even attempting full autonomy. A humanoid, autonomous robot might have a behavioral range that could include something closely resembling human hostility. Asimov’s Three Laws of Robotics might end up coming into play after all.

Those three laws, summarized, say that a robot cannot harm a human or allow one to come to harm, must obey human commands subordinate only to the First Law, and must preserve itself subject to the First and Second. While these surely sound like worthy goals, even a moment’s thought reveals a hidden presumption: Asimov’s robots were truly humanoid in their “thinking”, since even interpreting what those laws mean and what a violation would look like is an exercise in human judgment. If you’re a lousy driver, would your household robot be justified in holding you prisoner to keep you safe? No skiing or scuba either.

The big problem, of course, is that early robots wouldn’t be truly humanoid and couldn’t hope to apply these laws. Musk wants to test his robots by having them work in his factories, and that means they’d have a wide functional range even though they wouldn’t be able to understand who Asimov was or what his Laws of Robotics mean. How does such a robot learn not to set a crate down on a human co-worker, or hit one with a 2×12? The challenge Musk faces is that when early robots work in the real world, they have to obey real-world rules without human thinking. We don’t have to be told not to hit someone with a heavy plank or set a crate on them, but what about our factory robot? The truth is that the number of behavioral rules such a robot would have to obey to be safe and functional would be a major test of AI in itself, and what happens if somebody forgets to tell Robbie the Robot that an I-beam is as lethal as a 2×12?
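
To make the scale of that rule problem concrete, here’s a minimal Python sketch of what an explicit safety-rule table might look like. Everything in it is hypothetical (the object names, the clearance values, the function itself); the point is that every hazard has to be enumerated by hand, and a hazard nobody registered simply doesn’t exist to the robot.

```python
from dataclasses import dataclass

# Hypothetical safety-rule table: every hazardous object class must be
# enumerated by hand, with the clearance the robot must keep from any
# human while handling it. The values are invented for illustration.
HAZARD_CLEARANCE_M = {
    "crate": 1.0,   # don't set a crate down within 1 m of a person
    "2x12":  2.0,   # a swinging plank needs more room
    # "i_beam": ?   # if nobody adds this entry, there is no rule for it
}

@dataclass
class DetectedObject:
    kind: str
    distance_to_nearest_human_m: float

def action_is_safe(obj: DetectedObject) -> bool:
    """Return True if handling obj satisfies the enumerated rules.

    Note the failure mode: an object kind missing from the table gets a
    default clearance of zero, which means no rule at all.
    """
    required = HAZARD_CLEARANCE_M.get(obj.kind, 0.0)
    return obj.distance_to_nearest_human_m >= required

if __name__ == "__main__":
    print(action_is_safe(DetectedObject("2x12", 1.5)))    # False: too close
    print(action_is_safe(DetectedObject("i_beam", 0.1)))  # True, wrongly so
```

The I-beam line is the whole argument in miniature: the robot isn’t malicious, it just never got the rule.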

We have self-driving vehicles, though. John Deere says it’s looking to have fully autonomous farm production vehicles by 2030. It seems logical, then, that we could create a humanoid robot, right? Not so fast. An autonomous vehicle is a much easier nut to crack than an autonomous, fully humanoid robot, because the range of functional behaviors and the number of relevant stimuli are both limited. The same limitations mean there’s no value in making a vehicle look human; other form factors better suit the mission. Amazon and other Tesla competitors (or potential competitors) in the robotics space have apparently decided that the best strategy is to create a specialized autonomous device for the home that, like an autonomous car or harvester, isn’t expected to do everything people can do and thus doesn’t have to look and act like them.

But that’s not visionary, or even responsive to the broad view of what a robot should be and do. Tesla apparently wants a big jump, and if Musk wants a truly humanoid (human-looking) robot, it follows that it has to have a much broader set of functional behaviors and stimulus sensors than a car, or it can’t act in a way appropriate to its appearance. People expect something that looks like C3PO to behave like that Star Wars character.

If Optimus works in a factory as a humanoid, it’s going to have to take on jobs people could do, and Musk clearly expects that. That’s going to demand something that does a lot more than wave and walk, or it will end up reducing factory productivity rather than increasing it. Human workers who have to dodge 2x12s and I-beams and crates don’t get much work done. Thus, Musk’s goal demands that he somehow address the wide range of things people do, and know to avoid, and I think that’s something he’s underestimating.

I also think it’s clear that a near-term autonomous humanoid robot would have to be supported by an “out-of-body” AI agent process. In other words, the robot would not have an internal brain, or at least would have only minimal locally hosted functionality. The remainder would come from elsewhere, presumably hosted close enough that the latency of reacting to events wouldn’t be an issue. Human reaction time is a couple hundred milliseconds, an eternity in machine terms, so the control latency of this configuration should be easy to match.
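
As a back-of-the-envelope check (all the figures below are illustrative assumptions, not measurements of any real system), you can compare a human reaction budget to the round-trip cost of hosting the “brain” at various distances:

```python
# Rough latency budget for an "out-of-body" robot brain.
# All numbers are illustrative assumptions, not measurements.

HUMAN_REACTION_S = 0.25  # typical human reaction time, roughly 250 ms

def round_trip_s(distance_km: float, processing_s: float) -> float:
    """Round-trip delay to a remote agent: propagation both ways plus compute.

    Light in fiber covers roughly 200,000 km per second (about 2/3 of c).
    """
    propagation_s = 2 * distance_km / 200_000.0
    return propagation_s + processing_s

for site, km in [("same building", 0.1), ("metro edge", 50), ("distant cloud region", 2000)]:
    total = round_trip_s(km, processing_s=0.02)  # assume 20 ms of inference
    verdict = "beats" if total < HUMAN_REACTION_S else "misses"
    print(f"{site:>20}: {total * 1000:5.1f} ms round trip, {verdict} the human budget")
```

Even the distant-region case comes in well under a human reaction, which is why I don’t think latency is the hard part of the out-of-body approach.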

This approach raises the question of whether Optimus might be working with other Optimi rather than with humans, and whether the central intelligence would then be controlling the entire robot force. That would significantly reduce the burden of creating a robot smart enough to safely and functionally interact with complex humans in complex situations. But would a “factory robot” need to look like a human? Wouldn’t it be more logical to have specialized robots for specific factory tasks, and in fact for home and garden tasks as well? We have robots that can pick fruit already, and nobody expects to talk with one of them or have it walk the dog.
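
For what it’s worth, the central-intelligence pattern itself is simple to express. Here’s a purely illustrative Python sketch (nothing here reflects any real robotics API, and the robot names and task strings are invented) of one shared “brain” dispatching work to many simple bodies:

```python
import queue

# Illustrative central-controller pattern: one shared "brain" assigns
# tasks to many simple robot bodies, so no individual robot needs
# human-level smarts of its own.
class CentralController:
    def __init__(self, robot_ids):
        self.tasks = queue.Queue()
        self.idle = list(robot_ids)

    def submit(self, task: str) -> None:
        self.tasks.put(task)

    def dispatch(self):
        """Pair idle robots with queued tasks. The controller holds all
        the state; the robots only execute primitive commands."""
        assignments = []
        while self.idle and not self.tasks.empty():
            assignments.append((self.idle.pop(0), self.tasks.get()))
        return assignments

if __name__ == "__main__":
    ctl = CentralController(["optimus-1", "optimus-2"])
    ctl.submit("move crate from dock A to line 3")
    ctl.submit("fetch 2x12 stock")
    print(ctl.dispatch())
```

The interesting part is what the sketch leaves out: all the intelligence lives in the controller, which is exactly why the individual bodies wouldn’t need to be humanoid at all.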

There’s a fine line between vision and delusion, and a lot of people think Elon Musk has crossed that line a number of times, but he’s also made good on some pretty astonishing promises. Can he make C3PO real? Maybe he can, but I don’t think it’s going to happen as quickly as we might hope, or as he might believe. Still, I sure hope he makes it!