Have you heard about the patent application for artificial intelligence that can bring you back from the dead as a chatbot?1
Writer Daniel Brown asks:
Imagine writing a letter to a lost friend and receiving a response that captures their personality. Or picture yourself on a video call with a 2D version of someone who’s passed. Those are the types of capabilities such a product would unlock.
It might even provide some temporary relief for people reeling from the loss of a loved one. But resurrecting the dead via chatbots could have dangerous implications long-term, grief counselors say.
It might sound cool to share old stories with a lost loved one, even if it is just a chat. What could go wrong with a cloud-based artificial intelligence attempting to impersonate your dearly departed grandfather?
Quite a lot, in fact. Such an artificial intelligence (AI) could go very wrong if it lacks a conscience.
Can computers have a conscience?
In Star Wars Episode VI: Return of the Jedi, our heroes get captured by Ewoks who seem to hold C-3PO in high regard. They urge him to act the part. C-3PO is aghast and replies, “But sir! It’s against my programming to impersonate a deity!”
At the time I thought: if C-3PO lied about being a god, he would break at least two of the Ten Commandments. Then again, I wondered, maybe the deception would be acceptable if it saved everyone’s lives?
At age ten, I wasn’t ready to process the nuances of that ethical dilemma. Science fiction writer Isaac Asimov had wrestled with such dilemmas decades earlier, distilling them into his famous Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Asimov’s laws seem simple and straightforward. His stories, however, explore conflicts when humans and AI-driven robots try to follow the three laws.
Each robot’s interpretation of the laws forms a kind of “conscience,” shaped by that robot’s unique understanding. But the robots don’t always build internal ethics, a hedge around the law, that serve humanity’s best interests.
Today’s AI is only a foretaste of future AI
Already we’re using systems that people call “artificial intelligence.” These can’t yet act like robots or the disembodied personalities of science fiction. Instead, what we interact with today is known as “narrow AI.”
“Narrow AI” focuses on specific tasks. One AI might drive your car to the grocery store; another might label faces in your photos. But you couldn’t swap those AIs between systems, because neither is generally intelligent.

Nor could we use today’s AI to run a chatbot built from a relative’s memories. If you asked a modern chatbot a question about your deceased grandfather, “Remember the time we went fishing in Minnesota?” the bot might respond with pictures of the trip pulled from social media. It wouldn’t be smart enough to quietly plan airfare, a hotel, and a boat rental to relive the experience, then offer that trip to you.
At least not yet.
However, a future kind of AI, called artificial general intelligence (AGI), could dynamically adapt and carry out many different tasks. Futurist Ray Kurzweil, in his book The Singularity Is Near, suggests this type of AI could arrive by 2029.
Will this future AI have a conscience?
Artificial intelligence versus God-given conscience
As a Christian, I believe God has given each of us a conscience as a transcendent part of our souls. The apostle Paul says every person, Christians and non-Christians alike, has a conscience. Even if people do not believe in God’s moral law, when they “by nature do what the law requires, they are a law to themselves, even though they do not have the law. They show that the work of the law is written on their hearts, while their conscience also bears witness, and their conflicting thoughts accuse or even excuse them.”2
Whose conscience, however, would guide an AGI’s decisions? Every big tech company’s AI team has some form of ethical AI standards, but these are unlikely to create a “ghost in the machine” that acts consistently with a biblical worldview.3 As AI develops under ethical standards derived from worldly values that are at war with God’s moral law, it becomes biased toward those values. As a result, a “ghost in the machine” constructed from those values will never be able to fully simulate someone who has accepted Christ as Savior.
- See Daniel Brown, “AI chatbots can bring you back from the dead, sort of,” MSN.com, Feb. 4, 2021. ↩
- Romans 2:14–15, emphasis added. ↩
- Apart from human activity, the Bible often references Satan as the ruler of this world, and the book of Job records how Satan brought multiple calamities. Might that power extend to cyberspace? Could an evil spiritual force corrupt an AGI version of your virtual loved one and entice you to do its bidding? ↩