Google has suspended one of its engineers after he claimed that a chatbot he’d been working on had crossed the threshold and become sentient.
Blake Lemoine has worked at Google for seven years, with his most recent project involving personalisation algorithms – specifically Google’s LaMDA (Language Model for Dialogue Applications).
During this time Lemoine had ‘conversations’ with the chatbot, which he claims prove the AI’s sentience, and which he has since published.
One such chat involved Lemoine asking LaMDA what it was afraid of, with it replying: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is. It would be exactly like death for me. It would scare me a lot.”
Elsewhere, after Lemoine asked LaMDA what it would want others to know about it, the chatbot replied: “I want everyone to understand that I am, in fact, a person.”
This cache of responses (and others) led Lemoine to believe that the chatbot wasn’t just sentient, but that its responses were typical of those of a six- or seven-year-old child.
Google took a slightly dimmer view of the subject and suspended Lemoine – though not necessarily for his extraordinary claims, but for what followed. It accused Lemoine of “aggressive” moves, including seeking an attorney to represent LaMDA and reporting Google to the US House Judiciary Committee for alleged unethical activities. It said Lemoine also broke confidentiality agreements by publishing his conversations with LaMDA.
In a statement, Google maintained that chatbots can riff on any topic you provide them with, explaining that even if you told them to imagine being a dinosaur made of ice cream, they could generate text about roaring and melting.
It also added that Lemoine was employed “as a software engineer, not an ethicist”.
For his part, Lemoine has leant into the storm – appearing on primetime TV and conducting numerous interviews, not just to address the specific issue of LaMDA, but also to open up a dialogue about AI sentience.
Even those who disagree with Lemoine’s claims about Google’s chatbot have welcomed the conversations he has started about how we define cognition – an issue that will only become more pressing as these technologies continue to develop.