A fellow law professor informed George Washington University law professor Jonathan Turley that OpenAI's ChatGPT was falsely accusing him of sexual assault.
UCLA law professor Eugene Volokh was conducting research on OpenAI's ChatGPT and discovered that when asked to describe scandals involving fellow professors and to provide cited media sources, the AI-powered chatbot would fill in the blanks with made-up information. In Turley's case, ChatGPT's 3.5 model cited a fabricated 2018 Washington Post article that falsely accused the law professor of sexual misconduct with students during a school trip to Alaska.
Writing in USA Today, Turley said he has never been to Alaska, the cited Washington Post article doesn't exist, and he has never been accused of sexual misconduct, assault, or harassment. In an interview with the Washington Post, the law professor called the chatbot's accusations "quite chilling" and said allegations of this caliber are "incredibly harmful," especially in a climate where critics are willing to run with any seemingly credible piece of information without a second thought or further scrutiny.
That danger is compounded if those critics are incentivized to ruin the career of a colleague they disagree with.
Now a Columbia law scholar, citing the experiments conducted by Volokh, has warned of the legal liabilities that come with ChatGPT's false accusations and general misinformation. According to Columbia's Tim Wu, this controversial piece of technology has a serious "defamation liability problem," and its false accusations would qualify as "per se defamation."
Wu goes on to argue that ChatGPT is a "defamation machine" and that its false accusations raise many legal questions, such as who is culpable. In the defamation context, he argues, reputational damage is done the moment the words are published, and given the chatbot's reach, it makes no legal difference whether they come from a real person or not.
Notably, Wu isn't the first to point out this glaring fault of AI-powered chatbots such as ChatGPT. Brian Hood, the mayor of Hepburn Shire near Melbourne, Australia, is now suing OpenAI after ChatGPT falsely accused him of taking part in a foreign bribery scandal in the early 2000s involving a subsidiary of the Reserve Bank of Australia. Hood did work for that subsidiary, Note Printing Australia, but he was the one who notified authorities of the bribery payments to foreign officials. He was the whistleblower.