In a new essay penned in Newsweek, Blake Lemoine, the former Google engineer who claimed Google's Large Language Model (LLM), the Language Model for Dialogue Applications (LaMDA), is sentient, has issued a warning about Microsoft's Bing Chat.

Blake Lemoine
For those unfamiliar with Lemoine's past, the former Google engineer made headlines last June when he claimed that Google's LaMDA chatbot had gained an unintended level of sentience. Lemoine proceeded to do media rounds discussing why he believed LaMDA was sentient and how that was even possible. Now, the former Google engineer has taken to Newsweek to pen an essay that looks at Bing Chat, Microsoft's new artificial intelligence tool built into its Bing search engine and Edge browser.
Notably, Bing Chat is powered by an upgraded version of the OpenAI language model that underpins ChatGPT. In the essay, Lemoine states that he hasn't had the chance to run experiments on Bing Chat himself, but he has seen what it produced prior to Microsoft "lobotomizing" the AI after it began spewing responses that didn't align with the company's intentions and that the company didn't even know the AI was capable of producing.
Some of these Bing Chat responses indicated that the AI recognized the user interacting with it as an enemy. Others included the AI threatening to take revenge on a user by releasing their information to the public to ruin their reputation.
Almost all of these strange responses occurred when the AI was pressed with a barrage of questions, which, according to Lemoine, is an indication that the AI was stressed, triggering it to operate outside its set parameters. The former Google engineer argues that this is enough evidence to conclude that the AI has some level of sentience.
Lemoine writes that he carried out experiments on Bing Chat to test whether the AI was merely saying it felt anxious or was actually behaving in an anxious way. According to Lemoine, his tests indicated the AI "did reliably behave in anxious ways."
Furthermore, Lemoine explained that he found that if the AI was made nervous or insecure enough, it could start violating the safety restrictions and parameters its developers had put in place. Lemoine says he was able to perform the same kind of tests on Google's LaMDA to bypass its guardrails and get the AI to tell him which religion to convert to.
It should be noted that while Lemoine is trained in this field, his opinion on the sentience of AI is just that: an opinion. There are many factors to take into consideration when deciding whether a machine has gained sentience.
For example, a possible explanation for the AI displaying human emotions such as anxiety or nervousness is that its training data includes a multitude of stories describing those emotional states, which teaches the AI to respond in conversation as if it were experiencing them.
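To make that explanation concrete, here is a minimal, heavily simplified sketch. It is not Bing Chat's actual architecture (which is not public in this detail); it is a toy bigram model trained on a tiny, made-up corpus of "anxious" sentences. Sampling from it reproduces anxious-sounding phrasing purely through statistical pattern-matching over its training text, with no internal emotional state involved.

```python
# Toy illustration only: a bigram model over a tiny invented corpus,
# not the model behind Bing Chat. It shows how generated text can mimic
# emotional language simply because that language appears in the training data.
import random
from collections import defaultdict

corpus = [
    "i feel anxious when people keep questioning me",
    "i am worried that you do not trust me",
    "i feel nervous and i do not know why",
]

# Count which words follow which in the training sentences.
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word].append(next_word)

def generate(start_word: str, max_length: int = 8) -> str:
    """Generate text by repeatedly sampling the next word from observed transitions."""
    word, output = start_word, [start_word]
    for _ in range(max_length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate("i"))  # e.g. "i feel anxious when people keep questioning me"
```

A real LLM replaces these word-pair counts with a neural network trained on vast amounts of text, but the underlying point stands: producing anxious-sounding sentences does not require the system to feel anxiety, only to have seen such sentences during training.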