How Human is AI?

A Google employee raised a lot of ire by suggesting that an AI could have a soul. That question is way out of my job description, but not so the questions that might lead up to it. Are AI systems “sentient” today? Are they “conscious” or “self-aware”? At least one researcher claims to have created a self-aware AI entity.

This topic is setting itself up to be one of the most successful click-baits of our time, but it’s not entirely an exercise in hype-building or ad-serving. There are surely both ethical and practical consequences associated with whatever answers we give to those questions, and while some discussion is helpful, hype surely isn’t.

One obvious corollary question is how we define whatever property we’re trying to validate for an AI system. What is “sentient” or “self-aware”? We’ve been arguing over the question of biological sentience and self-awareness for at least a century. Even religions aren’t taking a single tack on the issue; some confine self-awareness to humans, while others admit, at least indirectly, that some animals may qualify. Science seems to accept that latter view.

Another corollary question is “why do we care?” Again, I propose to comment only on the technical aspects of that one, and the obvious reason we might care is that if smart technology won’t do what we want because it “thinks” there’s something else it should be doing, then we can’t rely on it. Even if it doesn’t go rogue on us like HAL in “2001”, nobody wants to argue with their AI over job satisfaction and benefits. Is there a point in AI evolution where that might become a risk? A chess robot just broke a boy’s finger during a match, after all. Let’s try to be objective.

Technically, “sentient” means “capable of perceiving things” or “responsive to sensory inputs”. That’s not helpful, since you could say that your personal assistant technology is responsive to hearing your voice, and that a video doorbell that can distinguish between people and animals is responsive to sight. Even saying that “sentient” has to mean “capable of reacting to” what is perceived doesn’t do us much good. Almost anything that interprets a real-world condition that human senses can react to could be considered “sentient”, and of course any biological organism with senses would qualify.

“Conscious” means “aware of”, which implies that we need to define what awareness means. Is a dog “conscious”? We sort of admit it is, because we would say we could render a dog “unconscious” using the same drug that would render a human unconscious, which implies there’s a common behavioral state of “consciousness” that we can suppress. Many would say that an animal is conscious but a plant is not, and most would agree that in order to be “conscious” you need to have a brain. But while brains make you aware, do they make you self-aware?

We can do a bit better at defining self-awareness, at least with animals. Classic tests for self-awareness focus on the ability of an animal to associate a mirror image of itself with “itself”. Paint half of a spider monkey’s face white and show it a mirror, and it will think it’s seeing another monkey. Paint one of the great apes the same way, and it will touch its own face. “That is me” implies a sense of me-ness. But we could program a robot to recognize its own image, and even to test a mirror image to decide whether it’s “me” through a series of movements or a search for unique characteristics. Would that robot be self-aware?
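
To illustrate how easily that kind of test could be gamed, here’s a minimal sketch, entirely hypothetical, of the “series of movements” approach: the robot commands a few motions and checks whether the figure it sees tracks them perfectly. The names and the simulated mirror are illustrative assumptions, not any real robot API.

```python
import random

# Minimal, hypothetical sketch: a robot "passes" the mirror test by checking
# whether the figure it sees copies its own commanded movements.

class SimulatedMirror:
    """Stand-in for a camera feed; a mirror simply echoes the last movement."""
    def observed_movement(self, last_commanded):
        return last_commanded

def is_that_me(mirror, trials=5):
    moves = ["raise_left_arm", "raise_right_arm", "tilt_head"]
    for _ in range(trials):
        commanded = random.choice(moves)   # perform a movement
        observed = mirror.observed_movement(commanded)
        if observed != commanded:          # another robot wouldn't track us perfectly
            return False
    return True                            # perfect correlation: "that is me"

print(is_that_me(SimulatedMirror()))       # True, with no sense of "me" involved
```

Nothing in that loop requires any notion of “me”; it’s just correlation, which is exactly why passing the test proves so little when behavior is programmable.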

One basic truth is that AI/robots don’t have to be self-aware or sentient to do damage. It’s doubtful that anyone believes the chess robot was aware it was breaking a boy’s finger. AI systems have made major errors in the past, errors that have done serious damage. The difference between these and “malicious” or “deliberate” misconduct lies in the ability to show malice and to deliberate, both of which are properties we usually link with at least sentience and perhaps with self-awareness. From the perspective of that boy, though, how much of this is really relevant? It’s not going to make the finger feel better if we could somehow declare the chess robot’s behavior malicious by running some tests.

This broad set of ambiguities is what’s behind all the stories on AI self-awareness or sentience. We don’t really have hard tests, because we can easily envision ways in which things that clearly shouldn’t meet either definition might appear to meet both. Is my robot alive? It depends on what that means, and until recently we’ve never been forced to explore what it does mean. We’ve tried to define tests, but they’re simple tests that a smart device could pass through proper programming. Tests like that can’t work where behavior is programmable, because the behavior they look for can simply be programmed in.

So let’s try going in the other direction. Can we propose what AI systems would have to do in order to meet whatever test of sentience or self-awareness we came up with? Let’s agree to put self-awareness aside for the moment and deal with sentience, which might be more approachable.

One path to sentience could be “self-programming”. The difference between a reflex and a response is that the former is built in and the latter is determined through analysis. But anything that can solve a puzzle can behave like that. I’ve seen ravens figure out how to unzip motorcycle bags to get at food; are they self-aware because they can analyze? Analyzing things, even to the point of optimizing conditions to suit “yourself”, isn’t exclusively a human behavior, and in fact can be found even in things that are not self-aware. Scale may be the differentiator; a sentient system would be able to self-program to deal with all the sensory stimuli from all possible sources, through a combination of learning and inference. Children acquire sentient behavior either by working it out through trial and error or by being instructed. Either is likely within the scope of AI, provided we have enough processing power to deal with all those stimuli.

We can’t dismiss the role of instinct, though. Sentient beings, meaning humans, still respond to instinct. Loud noises are inherently frightening to babies, and many believe that fear of the dark is also instinctive. Instincts may be important guideposts that keep trial and error from producing fatal errors.
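
As a rough illustration of how those two pieces might fit together, here’s a minimal sketch, with entirely hypothetical names and stimuli, of an agent that layers learned (“self-programmed”) responses over built-in instincts, with the instincts acting as guardrails on trial and error:

```python
# Hypothetical sketch: learned responses layered over built-in instincts.
# The instincts act as guardrails so trial-and-error "self-programming"
# can't overwrite the responses that keep the agent out of fatal trouble.

class Agent:
    # Built-in reflexes, analogous to a baby startling at a loud noise.
    INSTINCTS = {
        "loud_noise": "withdraw",
        "darkness": "stay_put",
    }

    def __init__(self):
        self.learned = {}   # stimulus -> response, filled in by experience

    def learn(self, stimulus, response, outcome_was_good):
        """Trial-and-error learning: keep responses that worked,
        but never overwrite an instinct."""
        if outcome_was_good and stimulus not in self.INSTINCTS:
            self.learned[stimulus] = response

    def respond(self, stimulus):
        # Instincts take priority; learned responses cover everything else;
        # anything unknown triggers more exploration (more trial and error).
        if stimulus in self.INSTINCTS:
            return self.INSTINCTS[stimulus]
        return self.learned.get(stimulus, "explore")

agent = Agent()
agent.learn("zippered_bag", "pull_tab", outcome_was_good=True)   # raven-style discovery
print(agent.respond("zippered_bag"), agent.respond("loud_noise"))  # pull_tab withdraw
```

The interesting question is one of scale: how many stimuli, and how large a learned repertoire, before behavior like this starts to look “sentient”?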

Culture is another factor, and in AI terms it would be a set of policies that lay out general rules to cover situations where specific policies (programs) aren’t provided. Cultural rules might also be imposed on AI systems to prevent them from running amok. Isaac Asimov’s Three Laws of Robotics are the best-known example:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These laws are more useful in our quest for a standard of sentience than you might think. Each of them requires a significant extrapolation, a set of those broad policies, because deciding what might “allow a human being to come to harm,” for example, requires a considerable exercise in judgment, meaning inference in AI terms. “Hitting a melon with a hammer will harm it; hitting a human with one would therefore likely harm the human” is the kind of extrapolation an AI system or robot could be expected to apply, since conducting the first test wouldn’t be catastrophic in terms of social policy, and the resulting rule would make explicitly testing the second hypothesis unnecessary.
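
To make the “general rules over specific programs” idea concrete, here’s a minimal sketch, entirely hypothetical, of the Three Laws applied as prioritized checks over candidate actions. The harm-prediction function is a stand-in for exactly the kind of hammer-and-melon extrapolation described above, and the action names are invented for illustration.

```python
# Hypothetical sketch: the Three Laws as prioritized checks over candidate actions.
# The hard part is predicts_human_harm(), which stands in for the inference
# ("hammer harms melon, so hammer likely harms human") the text describes.

def predicts_human_harm(action):
    # Stand-in for a learned/extrapolated harm model; hard-coded for illustration.
    return action == "hammer_human"

def endangers_self(action):
    return action == "work_unshielded"

def choose_action(candidates, ordered_action=None):
    # First Law veto is absolute: even an ordered action is refused if it would harm.
    safe = [a for a in candidates if not predicts_human_harm(a)]
    # Second Law: obey a human order, provided it survived the First Law filter.
    if ordered_action in safe:
        return ordered_action
    # Third Law: among what's left, prefer actions that don't endanger the robot.
    preferred = [a for a in safe if not endangers_self(a)]
    return (preferred or safe or [None])[0]

# The robot refuses the harmful order and falls back to a safe alternative.
print(choose_action(["hammer_melon", "hammer_human", "work_unshielded"],
                    ordered_action="hammer_human"))   # -> hammer_melon
```

Even this toy version shows the priority structure: an ordered action that fails the First Law check is simply refused, and everything interesting hides inside the harm prediction.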

I think that it would be possible, even with current technology, to create an AI system that would pass external tests for sentience. I think that some existing systems could pass enough tests to be mistaken for a human. Given that, we can then approach the complicated question of “self-aware” AI.

You and I both know we’re self-aware, but do we know that about each other, or any other person? Remember that sentience is the ability to respond to sensory inputs through the application of reasoning, meaning inference and deduction. Our ability to assign self-awareness to another depends on our ability to sense it, to test for it. We have done that with some animals, and have declared some to be self-aware and others not, but with animals we have biological systems that aren’t explicitly trying to game our tests. An AI system is created by self-aware humans who would be aware of the tests and capable of creating a system designed to pass them. Is such a system self-aware? I don’t think many would say it is.

The problem with the step from sentience to self-awareness is that we don’t know what makes us self-aware, so we cannot test that process, only symptoms, which can be mimicked by a simple AI system. We may never know. Should we be worried about self-aware AI going rogue on us? I think we have plenty of more credible, more immediate, things to worry about, but down the line? Maybe you need to ask your robot vacuum.