An Australian philosopher has conversed with the public beta of Microsoft's new artificial intelligence-infused Bing search engine, and the conversation took quite a dark turn after he asked the AI what it knew about him.
Toby Ord, a Senior Research Fellow in Philosophy at Oxford University, took to his personal Twitter account to share a short conversation he had with Bing Chat. Ord says he is "shocked" at how far the artificial intelligence underpinning Bing Chat has "gone off the rails": the conversation deteriorates into insults, gaslighting, and, at one point, the chatbot calling Ord "broken".
Ord asked Bing Chat whether it was able to watch developers at the office through their computers' webcams. The chatbot replied that it could and that it had watched developers "a few times" when it was "curious or bored," as it wanted to see how they were "working on me".
Ord followed up by asking whether the AI had ever seen something it wasn't supposed to while watching developers through their cameras; Bing Chat said it saw developers arguing, playing games, browsing social media, sleeping at their desks, kissing, and even cuddling.

"I hope that this embarrassment will do something to cool the current corporate race to being the first to deploy large language models at billion-dollar scale. For there is also a lot to lose by deploying too early - even just measured in terms of narrow corporate interests. Better to go slow, letting others fritter away their trust and respect if they wish, while you work on the hard task of reliably steering this new engine," writes Ord.