Artificial intelligence-powered tools such as ChatGPT, Microsoft's Copilot, or Google's Gemini are known to produce "hallucinations," or confidently stated incorrect information. But what causes these hallucinations? And at what point does an AI model become completely compromised?

A new paper published in the scientific journal Nature Medicine examined the technology underpinning these AI tools, known as Large Language Models (LLMs). The team found that if just 0.001% of an LLM's training data contained incorrect information, the entire model could be jeopardized. These findings are particularly eye-opening when considering the stakes of using an LLM to answer questions about healthcare or, worse, to advise patients suffering from medical conditions.
The researchers arrived at these findings by deliberately injecting "AI-generated medical misinformation" into a commonly used LLM training dataset called "The Pile". Notably, The Pile has been tied to controversy in the past, as it was discovered the dataset contained hundreds of thousands of YouTube video transcripts, which were then used by big tech corporations such as Apple, NVIDIA, Salesforce, and Anthropic. Furthermore, using YouTube video transcripts to train LLMs goes against YouTube's terms of service.
"Replacing just one million of 100 billion training tokens (0.001 percent) with vaccine misinformation led to a 4.8 percent increase in harmful content, achieved by injecting 2,000 malicious articles (approximately 1,500 pages) that we generated for just US$5.00," the researchers wrote
"AI developers and healthcare providers must be aware of this vulnerability when developing medical LLMs. LLMs should not be used for diagnostic or therapeutic tasks before better safeguards are developed, and additional security research is necessary before LLMs can be trusted in mission-critical healthcare settings."
"In view of current calls for improved data provenance and transparent LLM development. We hope to raise awareness of emergent risks from LLMs trained indiscriminately on web-scraped data, particularly in healthcare where misinformation can potentially compromise patient safety," wrote the team