What is it that truly separates us from the robots?
It’s more than self-awareness. OK Google has a self-awareness of a certain sort, but it’s certainly not a person.
We people (and many animals) have a consciousness, a sentience, a ‘qualia,’ that even a robot or computer that passed the Turing Test might lack. And whether artificial intelligence can have this same sentience or life-force is a question many sci-fi films answer in the affirmative (e.g. Ghost in the Shell, Her, 2001: A Space Odyssey).
A scene in Steven Spielberg’s A.I. Artificial Intelligence illustrates the question.
Professor Hobby: Tell me, what is love?
Robot: Love is widening my eyes a little bit… and quickening my breathing a little… and warming my skin… and touching my…
This robot is an example of what is called a “philosophical zombie.” The lights are on, but nobody’s home. The robot is intelligent, self-aware, and might even self-replicate. But it’s not alive – at least not in any of the ways that deeply matter.
Professor Hobby: You see, what I’m suggesting is that love will be the key… by which they acquire a kind of subconscious never before achieved. An inner world of metaphor, intuition, a self-motivated reasoning, of dreams.
In order to figure out if a device made from synthetic material can have this sentience, we have to figure out why creatures made of organic material can have it. And that is what philosophers call the “hard problem of consciousness.”
The tidiest conclusion is to deny the problem exists. That way there is no qualitative difference between us and a robot (or a thermostat). Philosopher Daniel Dennett does just that. In both his book Consciousness Explained and his popular TED Talk, he argues that this subjective “qualia” is just an illusion. You don’t actually have a self in the way you think you do. The sum of your parts doesn’t add up to something immaterial.
The primary argument Dennett makes against sentience in his book is that “qualia” is difficult to explain and is in many ways ineffable. I don’t find that argument convincing. I’m quite a skeptic, but just because we have trouble describing something with our current knowledge doesn’t mean it does not exist. Especially when I am constantly being provided with evidence of its existence! Descartes said it – cogito ergo sum. The fact that I have a subjective, conscious self is, in fact, the only thing I can be truly sure of. It’s much more likely that Dennett has made an error in reasoning than it is that I am a philosophical zombie.
Consciousness poses a challenge to the materialistic views that Dennett holds. He is convinced that matter and physical processes are all that exist in the universe. Consciousness doesn’t easily fit into that world view, and so denial is more useful than confronting the “hard problem” head on.
But you don’t necessarily need to reject materialism to think the problem exists. Dennett’s fellow pop atheist Richard Dawkins subscribes to the theory of emergent consciousness. According to this idea, activity patterns in the brain give rise to consciousness. The new thing (consciousness) can be said to emerge from the old, simpler thing (brain activity) in a way analogous to how hurricanes emerge from patterns of air temperature and pressure. But unlike meteorology, the idea of emergent consciousness isn’t a scientific one. At least at this point, it’s at the level of philosophical possibility. (I recommend Douglas Hofstadter’s I Am a Strange Loop for a longer, better exploration of the idea.)
Emergent consciousness is an attractive idea. It makes intuitive sense, and it seems to allow for a materialistic worldview without the need to deny experiential evidence of sentience.
For the sake of argument, let’s assume it’s true. This model could provide the possibility of artificial intelligence that is conscious. If my brain patterns create consciousness and your brain patterns create consciousness, why couldn’t a robot’s brain patterns create consciousness?
My first objection is that brains and computers think differently. It’s not simply a matter of complexity or processing power or speed. I can readily imagine that one day – maybe this century, maybe not – computers or synthetic networks will exist that have more complexity and can think faster than the human brain. But I still think there’s a qualitative difference between us and “them.”
Logic Gates in computer programming.
What most of us see when we interact with machines is the physical output. Behind the user interface is code, which ultimately describes magnetic poles and whether electric switches are on or off. Scientists have wired computer chips together in ways that resemble neurons, but they’re functionally still chips, not brains. The same goes for hypothetical, more complex quantum computers. Although computers can be programmed to learn natural language and mimic human irrationality, they themselves are rational.
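To make the point concrete, here’s a minimal sketch (my own illustration, not anything from the film or from Dennett) of the logic gates that sit at the bottom of all digital computation. Every input maps deterministically to exactly one output – there is no ambiguity and no holding of contradictions:

```python
# A minimal sketch of digital logic. NAND is "functionally complete":
# every other gate -- and so, in principle, every computer program --
# can be built by wiring NANDs together.

def NAND(a: bool, b: bool) -> bool:
    return not (a and b)

def NOT(a: bool) -> bool:
    return NAND(a, a)

def AND(a: bool, b: bool) -> bool:
    return NOT(NAND(a, b))

def OR(a: bool, b: bool) -> bool:
    return NAND(NOT(a), NOT(b))

def XOR(a: bool, b: bool) -> bool:
    return AND(OR(a, b), NAND(a, b))

if __name__ == "__main__":
    # The same inputs always yield the same output -- a truth table,
    # not a mood. The machine cannot answer "both" or "it depends."
    for a in (False, True):
        for b in (False, True):
            print(f"XOR({a}, {b}) = {XOR(a, b)}")
```

However sophisticated the layers stacked on top, this deterministic substrate is what the machine is always translating back down to.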
People are capable of thinking through algorithms, and a few of us can even process binary, but that’s not how we normally exist. We’re not on/off. A calculator will display an error message if asked to do something irrational, such as divide by zero. Human beings are completely capable of believing two contradictory things at the same time – to Mr. Dawkins’s constant frustration.
Logic gates do not apply to Ophelia
Human existence is rarely rational. It’s difficult to even describe it in a rational manner. The most complete way to describe a computer program might be to look at its code. The most complete way to describe a human life might be to see a performance of Shakespeare’s works. I’m not trying to be romantic. Our minds are much better at understanding and remembering through fictional stories than through straight facts.
And these stories communicate things we have trouble putting not just into rational sentences, but into any words at all. Computer language has precise, exact terms – a forgotten ‘>’ can wreak havoc on an entire program. Human language is fluid and elastic, with words that evade universal definition and purpose. Although OK Google might learn some of the “rules” of human language, it always has to translate them back into its own binary code of 1s and 0s. My thought, however, is in the same chaos as my words. There’s something about consciousness that seems fundamentally ineffable, which may put it forever beyond the reach of a mathematical system.
There are many more elemental problems to consider with the possibility of artificial consciousness. Does the type of matter that makes up my brain make a difference? Is emergent consciousness a special property of oxygen, hydrogen and carbon? Does consciousness require a spinal cord? In Ghost in the Shell, the “Puppet Master” appears to achieve sentience without any of those things.
Is it possible that consciousness is a special property of Earth life that can be inherited, but not duplicated? Are there undiscovered physical forces in the brain or body that give rise to consciousness? If so, will these forces ever be discovered? Could they be replicated with synthetic material?
These are just some of the questions we’d need to answer before building a conscious machine. I don’t think it’s happening anytime soon. Some philosophers think we’ll never figure out how consciousness works. This is the position of the “New Mysterian” philosophers, including Colin McGinn. New Mysterianism doesn’t claim there is necessarily anything supernatural about consciousness. Dennett and Dawkins’ fellow anti-theist popularizer Sam Harris endorses this position:
“But couldn’t a mature neuroscience nevertheless offer a proper explanation of human consciousness in terms of its underlying brain processes? We have reasons to believe that reductions of this sort are neither possible nor conceptually coherent. Nothing about a brain, studied at any scale (spatial or temporal), even suggests that it might harbor consciousness. Nothing about human behavior, or language, or culture, demonstrates that these products are mediated by subjectivity. We simply know that they are—a fact that we appreciate in ourselves directly and in others by analogy.”
Outside of committed materialists, most people across the world assume a mystical basis for consciousness. It is so unlike anything else we observe in the world, it seems supernatural. When we describe consciousness, we find ourselves most readily drawn to speaking in spiritual language – whether from Bronze Age religious texts or New Age YouTube videos. Some philosophers are using words like “panpsychism” or “panprotoexperientialism” and invoking “quantum mechanics” as an almost magical term.
Science-fiction film almost can’t help but use this same language to portray the acquisition of consciousness by robots. Stanley Kubrick showed the evolution of consciousness as something beyond our ability to grasp by giving astronaut David Bowman a mystic vision beyond even that of the medieval saints.
The Kubrick-developed A.I. Artificial Intelligence has a similar structure to 2001, and a similar resolution. In the quest to “become a real boy,” David fixates on the Pinocchio story, and believes the Blue Fairy can grant him personhood. David’s 2,000-year vigil in front of a wooden carving of her at Coney Island resembles a Marian devotion, while his interaction with the evolved robots once he is thawed from the ice brings us back into science fiction. The future synthetic creatures he meets are, like David, longing to bridge the gap between themselves and humanity. What they’re missing is a soul. Engineering and science aren’t enough; Kubrick & Spielberg had to add an ineffable element. They found the answer in David’s unconditional love for his mother. Experiencing love, as Professor Hobby explained, is the difference-maker between “real boys” and robots.
Whether we visualize it as tapping into a global consciousness, or being endowed with a soul by a creator, a robot’s awakening from machinehood to personhood is a gap to be bridged. I’m sympathetic to the New Mysterians on this one. We, as a species, can’t even describe the experience of consciousness fully or clearly. I doubt we ever will. And we can’t translate poetry or religion into computer code – at least not without losing what gives it fire.