Scientists announced on Monday that they have developed an artificial intelligence system that can transcribe what people are thinking from scans of their brain activity.

The new system was developed by researchers at The University of Texas at Austin, and according to the study published in the journal Nature Neuroscience, the technology is intended to help people who are mentally conscious but physically unable to speak, such as those who have suffered strokes. The study was led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin.
The researchers explain that the AI model was trained on data from people who took part in several sessions of functional magnetic resonance imaging (fMRI). These long hours of recorded brain activity were then fed into GPT-1, an early predecessor of GPT-4, the language model powering the popular AI chatbot ChatGPT. The researchers then trained the model to predict how each person's brain would respond to hearing speech, using short stories that the participants listened to during the fMRI sessions.
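The study's actual pipeline is far more involved, but the rough idea of such an "encoding model" can be sketched in a few lines of Python. In the sketch below, everything is a stand-in assumption rather than the authors' method: random vectors play the role of GPT-1 sentence embeddings, the fMRI responses are simulated, and an off-the-shelf ridge regression learns to map language features to voxel responses. Decoding then amounts to asking which candidate sentence's predicted brain response best matches the observed scan.

```python
# Illustrative sketch only: not the code or data from the UT Austin study.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

N_VOXELS = 500      # hypothetical number of recorded voxels
FEATURE_DIM = 768   # hypothetical embedding size

def embed(sentence: str) -> np.ndarray:
    """Stand-in for a language-model embedding of a sentence (hypothetical)."""
    seed = abs(hash(sentence)) % (2 ** 32)
    return np.random.default_rng(seed).normal(size=FEATURE_DIM)

# --- Training: learn to predict brain responses to heard speech ----------
training_sentences = [
    "I don't have my driver's license yet",
    "she has not even started to learn to drive yet",
    "the story began on a quiet morning",
]
X_train = np.stack([embed(s) for s in training_sentences])

# Simulated fMRI responses for one participant (real data would come from scans).
true_weights = rng.normal(size=(FEATURE_DIM, N_VOXELS))
Y_train = X_train @ true_weights + rng.normal(scale=0.1, size=(len(training_sentences), N_VOXELS))

encoding_model = Ridge(alpha=1.0).fit(X_train, Y_train)

# --- Decoding: choose the candidate whose predicted response fits best ----
observed = Y_train[0]  # pretend this scan was recorded while hearing sentence 0
scores = [
    -np.linalg.norm(encoding_model.predict(embed(c)[None, :])[0] - observed)
    for c in training_sentences
]
print("best guess:", training_sentences[int(np.argmax(scores))])
```

In this toy setup the decoder can only choose among sentences it is given, which mirrors why a gist-level guess like the driver's-license example below is plausible while word-for-word transcription is not.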
The team acknowledges that the language decoder wasn't perfect but was able to "recover the gist of what the user was hearing," according to the study's lead author, Jerry Tang. For example, when a participant listened to the sentence "I don't have my driver's license yet," the model transcribed the brain activity as "she has not even started to learn to drive yet."
The scientists recognize this limitation, explaining that fMRI scanning is simply too slow to capture individual words; instead, it captures what co-author Alexander Huth describes as "a mishmash, an agglomeration of information over a few seconds." Despite these limitations, the researchers were able to track how an idea evolves over time.

So, what are the concerns? Humans are now one step closer to machines being able to read our thoughts. According to David Rodriguez-Arias Vailhen, a bioethics professor at Spain's Granada University who was not involved in the study, machines are "able to read minds and transcribe thought," and in the future this could happen without a person's consent, for example while they are asleep.
To address these concerns, the researchers behind the study tested the same brain-reading system on people who had not undergone the long fMRI training sessions. The AI could not produce intelligible transcriptions of their brain activity, showing that it is currently incapable of decoding the brain activity of anyone it has not already been trained on.
"Our mind has so far been the guardian of our privacy. This discovery could be a first step towards compromising that freedom in the future," said bioethicist Rodriguez-Arias Vailhen
In other news about artificial intelligence, a modder has combined OpenAI's language model, which powers its immensely popular AI chatbot ChatGPT, with the classic Bethesda title Skyrim, enabling NPCs to communicate creatively with players through the new mod.