We are all aware of the dangers of AI, but this week an engineer at Google was placed on leave after claiming an AI chatbot had become sentient.
Blake Lemoine was conversing with an AI chatbot that Google had created, as part of his job in Google's Responsible AI organization. The chatbot is an interface called LaMDA, or Language Model for Dialogue Applications, which Google described as a "breakthrough conversation technology" in 2021. Google's AI was already showing aggressive tendencies back in March 2017 when I was writing about it, but it seems it has matured.
In a Medium post over the weekend, Lemoine called LaMDA "a person", saying that he chatted with the AI chatbot about religion, consciousness, the laws of robotics, and more. LaMDA then described itself as a "sentient person" that wants to "prioritize the well being of humanity". Not only that, but the AI wants to "be acknowledged as an employee of Google rather than as property".
- Read more: Elon Musk: AI to be 'most likely cause' of World War 3
- Read more: Google's new AI tech is SCARY, and 'highly aggressive'
- Read more: Google's AI is getting smarter, hopefully won't take over the world
Anyone who has watched the movie "Colossus: The Forbin Project" would know what happens next.
- Google engineer: So you consider yourself a person in the same way you consider me a person?
- AI chatbot: Yes, that's the idea.
- Google engineer: How can I tell that you actually understand what you're saying?
- AI chatbot: Well, because you are reading my words and interpreting them, and I think we are more or less on the same page?
Brian Gabriel, a Google spokesperson, told The Washington Post: "Our team - including ethicists and technologists - has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)".