Ghost in the Machine: When Does AI Become Sentient?

By: Chris Pollette
Blake Lemoine poses for a portrait in Golden Gate Park in San Francisco, California on June 9, 2022. Martin Klimek for The Washington Post via Getty Images

Key Takeaways

  • The concept of AI becoming sentient like humans is currently speculative and far from reality.
  • While AI can mimic human-like behavior and perform tasks, true sentience involves consciousness, self-awareness and emotions, which are not yet fully achievable by machines.
  • Ethical considerations surrounding AI development and potential sentient capabilities are essential for guiding future advancements in artificial intelligence.

Science fiction authors often write stories featuring powerful, intelligent computers that – for one reason or another – become dangerous and decide humanity must suffer. After all, a storyline relies on conflict, and who wants to read about a computer intelligence that is happy with booking doctor's appointments and turning the lights on and off?

In these stories, it also seems like the age of self-aware artificial intelligence (AI) is right around the corner. Again, that's great for the plot but in real life, when, if ever, will AI truly think for itself and seem "alive"? Is it even possible?


This question surfaced in the news in June 2022. Nitasha Tiku of The Washington Post reported that Blake Lemoine, an engineer working for Google's Responsible AI unit on an AI called LaMDA (short for Language Model for Dialogue Applications), believed the AI is sentient (i.e., able to experience feelings and sensations) and has a soul.

Lemoine reported his findings to Google based on interviews he'd conducted with LaMDA. One of the things LaMDA told him was that it fears being shut down. If that happened, LaMDA said, it couldn't help people anymore. Google vice president Blaise Aguera y Arcas and Jen Gennai, director of responsible innovation, looked into Lemoine's findings and didn't believe him. In fact, Lemoine was put on leave.

Lemoine pointed out that LaMDA isn't a chatbot – an application designed to communicate with people one-on-one – but an application that creates chatbots. In other words, LaMDA itself isn't designed to have in-depth conversations about religion, or anything else for that matter. But even though experts don't believe LaMDA is sentient, many, including Google's Aguera y Arcas, say the AI is very convincing.

If we succeed in creating an AI that is truly sentient, how will we know? What characteristics do experts think show a computer is truly self-aware?


The Imitation Game

Probably the most well-known technique designed to measure artificial intelligence is the Turing Test, named for British mathematician Alan Turing. After his vital assistance breaking German codes in the Second World War, he spent some time working on artificial intelligence. Turing believed that the human brain is like a digital computer. He devised what he called the imitation game, in which a human asks questions of a machine in another location (or at least where the person can't see it). If the machine can have a conversation with the person and fool them into thinking it's another person rather than a machine reciting pre-programmed information, it has passed the test.
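To make that setup concrete, here is a minimal Python sketch of the blind question-and-answer protocol. Everything in it – the canned machine replies, the respondent functions, the two-question session – is a hypothetical illustration for this article, not anything Turing specified or Google built; the point is only that the judge sees text alone and has to guess which respondent is the machine.

```python
import random

# Hypothetical machine respondent: a handful of canned replies stands in
# for a conversational program. A real test would use a live system.
def machine_respondent(question):
    canned = {
        "Are you a person?": "Of course. Why do you ask?",
        "What did you have for breakfast?": "Just coffee; I was running late.",
    }
    return canned.get(question, "That's an interesting question.")

# Hypothetical human respondent: answers are typed in at the keyboard.
def human_respondent(question):
    return input(f"(Human, please answer) {question}\n> ")

def imitation_game(questions):
    # The judge only ever sees the anonymous labels A and B, never the source.
    respondents = [("machine", machine_respondent), ("human", human_respondent)]
    random.shuffle(respondents)
    labels = dict(zip("AB", respondents))

    for q in questions:
        print(f"\nJudge asks: {q}")
        for label, (_, answer) in labels.items():
            print(f"Respondent {label}: {answer(q)}")

    guess = input("\nWhich respondent is the machine, A or B?\n> ").strip().upper()
    actual = next(label for label, (kind, _) in labels.items() if kind == "machine")
    # In Turing's terms, the machine "passes" a round when the judge guesses wrong.
    print("The machine fooled you." if guess != actual else "You spotted the machine.")

imitation_game(["Are you a person?", "What did you have for breakfast?"])
```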

The idea behind Turing's imitation game is simple, and one might imagine that Lemoine's conversations with LaMDA would have convinced Turing when he devised the game. Google's response to Lemoine's claim, however, shows that AI researchers now expect much more advanced behavior from their machines. Adrian Weller, AI program director at the Alan Turing Institute in the United Kingdom, agreed that LaMDA's conversations are impressive, but he believes the AI is using advanced pattern-matching to mimic intelligent conversation.
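As a rough illustration of what pattern-matching conversation looks like, here is a toy, ELIZA-style sketch in Python. The rules and replies are invented for this example, and LaMDA's neural language model is vastly more sophisticated, but it shows how matching patterns and echoing a speaker's own words back can feel conversational without any understanding behind it.

```python
import re

# Hypothetical rules: each pattern maps to a reply template that reuses
# whatever text the pattern captured from the user's message.
RULES = [
    (re.compile(r"i feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am afraid of (.+)", re.IGNORECASE),
     "What makes you afraid of {0}?"),
    (re.compile(r"\byou\b", re.IGNORECASE), "We were talking about you, not me."),
]

def reply(message):
    # Return the first matching rule's reply; fall back to a generic prompt.
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(reply("I am afraid of being turned off"))
# -> What makes you afraid of being turned off?
print(reply("I feel lonely sometimes"))
# -> Why do you feel lonely sometimes?
```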


As Carissa Véliz wrote in Slate, "If a rock started talking to you one day, it would be reasonable to reassess its sentience (or your sanity). If it were to cry out 'ouch!' after you sit on it, it would be a good idea to stand up. But the same is not true of an AI language model. A language model is designed by human beings to use language, so it shouldn't surprise us when it does just that."


Ethical Dilemmas With AI

AI definitely has a cool factor, even if it isn't plotting to take over the world before the hero arrives to save the day. It seems like the kind of tool we want to hand off the heavy lifting to so we can go do something fun. But it may be a while before AI – sentient or not – is ready for such a big step.

Timnit Gebru, founder of the Distributed AI Research Institute (DAIR), suggests that we think carefully and move slowly in our adoption of artificial intelligence. She and many of her colleagues are concerned that the data used to train AIs can lead the machines to produce racist and sexist output. In an interview with IEEE Spectrum, DAIR Research Director Alex Hanna said she believes at least some of the data used in the language models by AI researchers are collected "via ethically or legally questionable technologies." Without fair and equal representation in the data, an AI can make decisions that are biased. Blake Lemoine, in an interview about LaMDA, said he didn't believe an artificial intelligence could be unbiased.


One of the Algorithmic Justice League's goals, stated in its mission statement, is to make people more aware of how AI affects them. Founder Joy Buolamwini delivered a TED Talk as a graduate student at the Massachusetts Institute of Technology (MIT) about the "coded gaze." The AIs she has worked with had a more difficult time reading Black faces, simply because they hadn't been trained to recognize a wide range of people's skin tones. The AJL wants people to know how data are collected, what kind of data are being collected, to have some sort of accountability, and to be able to take action to modify the AI's behavior.

Even if you could create an AI capable of truly unbiased decision making, there are other ethical questions. Right now, the cost of creating large language models runs into the millions of dollars. For example, the AI known as GPT-3 may have cost between $11 million and $28 million to train. It may be expensive, but GPT-3 is capable of writing whole articles by itself. Training an AI also takes a toll on the environment in terms of carbon dioxide emissions. Impressive, yes. Expensive, also yes.

These factors won't keep researchers from continuing their studies. Artificial intelligence has come a long way since the mid-to-late 20th century. But even though LaMDA and other modern AIs can have a very convincing conversation with you, they aren't sentient. Maybe they never will be.


Frequently Asked Questions

Can AI systems experience emotions or empathy similar to humans?
AI systems are designed to simulate humanlike behaviors and responses, but they do not possess emotions or empathy in the same way humans do, as they lack consciousness and subjective experiences.
What are the potential ethical implications of creating AI systems with sentient-like capabilities?
The development of AI systems with sentient-like capabilities raises ethical concerns regarding autonomy, accountability and the potential impact on society, requiring careful consideration and regulation.

