ChatGPT warning sounded on 'ticking time bomb' legal issues

A law scholar has sounded the alarm over the legal liabilities of OpenAI's ChatGPT artificial intelligence-powered chatbot after it spewed misinformation.


A fellow law professor informed George Washington University law professor Jonathan Turley that OpenAI's ChatGPT was falsely accusing him of sexual assault.

UCLA law professor Eugene Volokh was conducting research on OpenAI's ChatGPT and discovered that when asked to describe scandals involving fellow professors and to provide cited media sources for the information, the AI-powered chatbot would begin to fill in the blanks with made-up information. In Turley's case, ChatGPT's 3.5 model cited a phony 2018 Washington Post article that falsely accused the law professor of sexual misconduct with students during a school trip to Alaska.

According to Turley, writing in USA Today, he has never been to Alaska, the cited Washington Post article doesn't even exist, and he has never been accused of sexual misconduct, assault, or harassment. In an interview with the Washington Post, the law professor said he found the accusations from the AI chatbot "quite chilling" and said that allegations of this caliber are "incredibly harmful", especially in a climate where critics are willing to run with any piece of seemingly credible information without a second thought or further analysis.

That danger is especially acute if those critics are incentivized to ruin the career of a colleague they disagree with.

Now, a Columbia law scholar, citing the experiments conducted by Volokh, has warned about the legal liabilities that come with ChatGPT's false accusations and general misinformation. According to Columbia's Tim Wu, this controversial piece of technology has a serious "defamation liability problem," and its output would fall under "per se defamation".

The law professor goes on to argue that ChatGPT is a "defamation machine" and that many legal questions arise when ChatGPT spews false accusations, such as who is culpable. Wu argues that in the defamation context, reputational damage is done once the words are said, and given the power of the chatbot, it makes no legal difference whether they come from a real person or not.

Notably, Wu isn't the first to point out this glaring fault of artificial intelligence-powered chatbots such as ChatGPT. Brian Hood, the mayor of Hepburn Shire near Melbourne, Australia, is now suing OpenAI over ChatGPT falsely accusing him of taking part in a foreign bribery scandal that involved a subsidiary of the Reserve Bank of Australia in the early 2000s. Hood did work for the Reserve Bank's subsidiary, Note Printing Australia, but he was the individual who notified authorities of the bribery payments to foreign officials. He was the whistleblower.


Jak joined the TweakTown team in 2017 and has since reviewed hundreds of new tech products and kept us informed daily on the latest science, space, and artificial intelligence news. Jak's love for science, space, and technology, and, more specifically, PC gaming, began at 10 years old. It was the day his dad showed him how to play Age of Empires on an old Compaq PC. Ever since that day, Jak has been in love with games and the progression of the technology industry in all its forms. Rather than typical FPS titles, Jak holds a very special spot in his heart for RTS games.
