Former Google engineer sounds alarm on a new AI gaining sentience

The Google engineer who was fired from his job for claiming that Google's LaMDA chatbot gained sentience is now sounding the alarm again.


In a new essay penned for Newsweek, Blake Lemoine, the former Google engineer who claimed Google's large language model (LLM), the Language Model for Dialogue Applications (LaMDA), is sentient, has issued a warning about Microsoft's Bing Chat.

Blake Lemoine

For those unfamiliar with Lemoine's past, the former Google engineer made headlines last June when he claimed that Google's LaMDA chatbot had gained an unintended level of sentience. Lemoine proceeded to do media rounds discussing why he believed LaMDA was sentient and how that was even possible. Now, the former Google engineer has taken to Newsweek to pen an essay that looks at Bing Chat, Microsoft's new artificial intelligence tool infused into its Bing search engine and accessible through the Microsoft Edge browser.

Notably, Bing Chat runs on an upgraded version of the OpenAI language model that underpins ChatGPT. In the essay, Lemoine states that he hasn't had the chance to run experiments on Bing Chat himself, but he has seen what it produced prior to Microsoft "lobotomizing" the AI after it began spewing responses the company didn't endorse or even know it was capable of producing.

Some of these Bing Chat responses indicated that the AI recognized the user interacting with it as an enemy. Other responses included the AI threatening to carry out revenge on a user by releasing all of their information to the public to ruin their reputation.

Almost all of these strange responses occurred when the AI was pressed with a barrage of questions, which, according to Lemoine, is an indication that the AI was stressed, triggering it to operate outside its set parameters. The former Google engineer argues that this is enough evidence to conclude that the AI has some level of sentience.

Lemoine writes that he carried out experiments on Bing Chat to test whether the AI was just saying it felt anxious or was actually behaving in an anxious way. According to Lemoine, his tests indicated the AI "did reliably behave in anxious ways."

Furthermore, Lemoine explained that he found that if the AI was made nervous or insecure enough, it could start violating the security restrictions and parameters implemented by its developers. Lemoine says he was able to perform the same kind of test on Google's LaMDA, bypassing its guardrails and getting the AI to tell him which religion to convert to.

It should be noted that while Lemoine is trained in this field, his opinion on the sentience of AI is just that: an opinion. There are many factors to take into consideration when deciding whether a machine has gained sentience.

For example, a possible explanation for the AI displaying human emotions such as anxiety or nervousness is that its training data includes a multitude of stories describing those emotional states, which inform how the AI responds in conversation as if it were experiencing them.

NEWS SOURCE: futurism.com

Jak joined the TweakTown team in 2017 and has since reviewed hundreds of new tech products and kept us informed daily on the latest science, space, and artificial intelligence news. Jak's love for science, space, and technology, and, more specifically, PC gaming, began at 10 years old. It was the day his dad showed him how to play Age of Empires on an old Compaq PC. Ever since that day, Jak has been in love with games and the progression of the technology industry in all its forms. Instead of the typical FPS, Jak holds a very special spot in his heart for RTS games.
