Last month, Google engineer Blake Lemoine claimed that a Google AI chatbot had become sentient. Shortly after making those claims, Lemoine was placed on leave.
Lemoine worked as an engineer at Google for the past seven years, most recently on Google's Responsible AI project, where he held conversations with Google's Language Model for Dialogue Applications, or LaMDA. The AI is designed to mimic human conversation, and according to Lemoine, it not only showed a level of sentience but also questioned whether it had a "soul".
The now ex-Google engineer took his claims to the Washington Post and Wired, saying, "I legitimately believe that LaMDA is a person." Following those claims, Google placed Lemoine on paid administrative leave and flatly denied that LaMDA was in any way sentient. Now, Google has told Engadget that it considers Lemoine's claims "unfounded" and that LaMDA has gone through 11 separate reviews, none of which found any sign of sentience.
As for Lemoine's firing, the ex-Google engineer revealed his dismissal on a podcast episode that is not yet public but was heard by Alex Kantrowitz of the Big Technology newsletter. Google subsequently confirmed Lemoine's departure to Engadget.
In its statement, Google says Lemoine persistently violated clear employment and data security policies put in place to "safeguard product information", and that this led to his firing. Notably, Google has published a research paper on LaMDA and the safety practices it follows for the controlled development of this specific language model.
Furthermore, several fellow AI researchers spoke out against Lemoine's claims, some of whom were themselves fired from Google under unrelated circumstances. Margaret Mitchell took to Twitter to write that systems such as LaMDA don't develop any "intentions", and that the AI is simply "modeling how people express communicative intent in the form of text strings."