Key Takeaways
- The concept of AI becoming sentient like humans is currently speculative and far from reality.
- While AI can mimic human-like behavior and perform tasks, true sentience involves consciousness, self-awareness and emotions, which are not yet fully achievable by machines.
- Ethical considerations surrounding AI development and potential sentient capabilities are essential for guiding future advancements in artificial intelligence.
Science fiction authors often write stories featuring powerful, intelligent computers that – for one reason or another – become dangerous and decide humanity must suffer. After all, a storyline relies on conflict, and who wants to read about a computer intelligence that is happy with booking doctor's appointments and turning the lights on and off?
In these stories, it also seems like the age of self-aware artificial intelligence (AI) is right around the corner. Again, that's great for the plot, but in real life, when, if ever, will AI truly think for itself and seem "alive"? Is it even possible?
This question surfaced in the news in June 2022, when Nitasha Tiku reported that Blake Lemoine, an engineer working in Google's Responsible AI unit on an AI called LaMDA (short for Language Model for Dialogue Applications), believed the AI was sentient (i.e., able to experience feelings and sensations) and had a soul.
Lemoine reported his findings to Google based on interviews he'd conducted with LaMDA. One of the things LaMDA told him was that it fears being shut down. If that happened, LaMDA said, it couldn't help people anymore. Google vice president Blaise Aguera y Arcas and Jen Gennai, director of responsible innovation, looked into Lemoine's findings and didn't believe him. In fact, Lemoine was put on leave.
Lemoine pointed out that LaMDA isn't a chatbot – an application designed to communicate with people one-on-one – but an application that creates chatbots. In other words, LaMDA itself isn't designed to have in-depth conversations about religion, or anything else for that matter. But even though experts don't believe LaMDA is sentient, many, including Google's Aguera y Arcas, say the AI is very convincing.
If we succeed in creating an AI that is truly sentient, how will we know? What characteristics do experts think show a computer is truly self-aware?