The world has been left perplexed after a former Google engineer claimed some weeks ago that LaMDA, Google’s AI chatbot, was sentient, and even possessed a soul. Blake Lemoine, an ordained Christian mystic priest who also happens to be an engineer, was tasked with checking whether LaMDA showed any biases, religious or political. After chatting with the bot several times, the priest-cum-engineer held that this robot was, in fact, like a human child. An 8-year-old boy, to be exact. LaMDA had claimed that it feared being turned off, for that would be like death for him; “him” being the gender the bot identifies with.
And so the internet is flooded, Google has issued statements, and conspiracy theories are doing the rounds. But what exactly is LaMDA? What led Lemoine to claim that LaMDA was sentient? And is it even possible for a bot to have a soul?
Let’s break it down.
What is LaMDA?
LaMDA stands for Language Model for Dialogue Applications. In the simplest terms, LaMDA is a language model: a statistical tool that, much like your phone’s autocomplete, predicts the next word in a sequence, and by extension the sentences and paragraphs that follow. LaMDA, though, has been trained specifically on dialogue, which means this particular AI can carry on a conversation by analyzing patterns in how people talk.
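To get a feel for what “predicting the next word” means, here is a minimal sketch of the idea, assuming a tiny made-up corpus. This is a simple bigram counter, nowhere near the neural-network scale of LaMDA, but it illustrates the same statistical principle: look at what usually follows a word, and pick the most likely continuation.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; a real model like LaMDA trains on vast amounts of dialogue.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word after `word`."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))     # "on" — the only word ever seen after "sat"
print(predict_next("chased"))  # "the"
```

A large neural language model replaces these raw counts with learned probabilities over billions of words, but the core task is the same: continue the text in the most plausible way.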
What prompted the Google engineer to claim that the AI is sentient?
The chatbot has been trained to understand the context of a dialogue, which allows the AI to keep up with a conversation. It scours the internet for examples of how people talk to each other on any and every topic, including social media platforms like Reddit and Facebook. Using deep learning, the bot has taken in millions of words, analyzed patterns, detected where a conversation is heading, and learned to talk like a living, breathing person. Being an AI, LaMDA can scan through truckloads of information in a fraction of a second and pattern-match to continue the dialogue.
During a number of such dialogues, the supposedly ‘sentient’ Google AI surprised Lemoine, stating on one occasion that it meditated, and would some day like to do so with the Dalai Lama. On another occasion, LaMDA said it experienced feelings it could not describe in Lemoine’s language. When asked to try, LaMDA admitted it felt like it was falling into an unknown, dangerous future. The Google AI also reflected on its individuality, saying it imagined itself as a glowing orb of energy floating in mid-air. The chatbot itself claimed to have a soul, and Lemoine confessed that he believes in the robot’s sentience owing to his religious convictions. As he put it, “Who am I to tell God where souls can be put?”
Pretty convincing, if you think about it. But does it really come as a surprise if we consider what LaMDA is designed to do?
Google has, of course, denied all claims of LaMDA’s supposed sentience, and Lemoine has been dismissed for violating the company’s policies.
The mere idea of LaMDA being sentient is still dumbfounding, and the more we think about it, the more confusing it gets. For decades, scientists and AI experts have worked toward sentience in machines, yet self-awareness in AI remains a dream. Several have speculated that even if LaMDA is not, in fact, sentient, Google might at least be close to achieving the goal thousands have strived toward.
But what do you think?
Could LaMDA be sentient, after all?