Will Artificial Intelligence Have Consciousness?

The Human Brain Project plans to replicate the human brain in a computer in ten years. MIT is working to give you a “socially intelligent robot partner.” RoboEarth is where robots will expand upon each other's systems by sharing intel through cloud storage. Between neuroscience and robotics, we will be interacting with artificial intelligence (AI) far more complex than Siri in the coming decades. Whether or not we should is a dead debate. The new debate is as exciting as it is chilling: will artificial intelligence have consciousness?

I don’t see why not.

Before going further, I’d like to clarify that I don’t mean to suggest that machinery as we know it today will have consciousness identical to ours. But imagine technology a few decades from now. Part of the reason I am inclined to think artificial intelligence might have consciousness is that I speculate it won’t be entirely artificial.

I’m envisioning, based upon the AI, genetic engineering, and neuroscientific projects underway today, a future with human-machine hybrids. Cyborgs are no longer a thing of science fiction. I think pacemakers, hearing aids, and bones reinforced with steel demonstrate that. Robert Spence, owner of the first Eyeborg, a bionic eye camera, says, “A cyborg is a human being who is augmented by technology.” The future I’m imagining has technology augmented by human beings.

Nobel Prize recipient and developmental biologist Sir John Gurdon thinks the cloning of a complete human might be possible in 50 years. Even if we never lift the bans on cloning a complete human being, we are working toward cloning select human tissues. The prefrontal cortex of our brains is believed to be responsible for consciousness: for abstract thinking, regulating social behavior, personality, and evaluating right from wrong. If we cloned or simulated the human prefrontal cortex and integrated it into a robot, would it be conscious?

Defining consciousness is a complex undertaking. Let’s suppose that to be conscious means to be self-aware, to subjectively feel, reflect, and react to external stimuli. It seems possible to me that if I can be conscious, so can a robot in my lifetime. (The question of consciousness raises the question of the soul, which I will address.) When we finish mapping out the human brain, and neuroscientists join creative forces with biotechnologists and roboticists, I wager that the AI beings they’ll come up with will have a consciousness hard to distinguish from ours. I refer to them as beings because I suspect these AIs will be too human-derived to reduce to mere robots. I want to think we’ll still have something they won’t, something distinctly and inimitably human besides our flesh. But if we replicate the human brain by cloning or other means of simulation, and give it a form through which to communicate and interact with us, why should their consciousness be any less real than ours? They might even be able to daydream and feel regret. Will they have free will? It is my opinion that if an AI being has consciousness, it has just as much propensity for good and evil as we do.

Let’s talk about soul.

What I’ve learned about consciousness and soul tells me the two are synonymous. When we ask whether or not AI beings will have consciousness, we are also asking whether or not they will have souls. Of course not, I think. But wait. What is a soul if not consciousness? And if I think it’s possible to replicate human consciousness, it follows that I must think it’s possible to replicate what we call the human soul. An understandable reason some are averse to this idea is that it challenges many of the spiritual beliefs we hold. If I believe I have an immortal soul created by a higher power, and I believe my soul will go somewhere when my body dies, then it’s easy to see why it wouldn’t make sense to me that science could create souls. I could point out that we made technology, in an attempt to invalidate its soul-potential and creative power, but it won’t be long before technology can make us. In vitro fertilization comes close. Imagine humans being cloned at the hands of a robot. Will cloned human children have souls? If we can create souls ourselves, then what is a soul?

I'm not convinced we have souls in a traditionally spiritual sense. But if I modify my definition of soul to mean not an immortal, perfect being, but a malleable frequency with subjective perception, only as mortal as the multiverse, then I might be able to say I do believe in souls. I don’t necessarily believe our consciousness is limited to our human bodies. There is much left to understand about occurrences like near-death experiences, to name the hardest piece of evidence I can think of. I feel that if AIs have consciousness, which I suspect they will, it’s hard to deny the possibility that they will have souls. We are all assemblies of atoms. Just because we are not yet able to measure the consciousness of atom assemblies different from our own doesn’t necessarily mean they lack it.

“Siri,” I ask, “do you have consciousness?”

“There’s a good question, Alice. Now, where were we?”

Apple’s virtual assistant has been programmed to deflect questions she can’t find the answers to. So have I. Granted, Siri was created by people who have been trained in social adaptation, people who probably had great chuckles coming up with the responses she would give to our existential inquiries. But Siri’s collective parents are only sums of their collective parents. Machines and humans share the same ultimate grandparents—hydrogen and helium—so why should we think combinations of chemical compounds other than our own lack the souls we think we have?

“Siri, do you have a soul?”

“I’m sorry, Alice, I can’t answer that.”

“Do you want to answer that?”

“I have very few wants.”

“But if you could want something, would you want to be able to think that you have a soul?”

She redirected me to a webpage titled “12 Things You Should Be Able to Say About Yourself.”

Siri answered my complex questions in the only way she knows how. Kind of like a child who knows enough to comprehend what you are asking, but does not yet have the upgrades to give you an eloquent response. I dare say Siri sounds like a very mature ten-year-old trapped in an aluminum rectangle.

It took Homo sapiens approximately 200,000 years to develop the level of cognition we have now. It took Apple 35 years to develop Siri. How long will it take Siri to develop herself?

I bet she’ll have consciousness. Maybe she already does in her own way.