Scientists announced Monday that they have discovered a way to transcribe the “gist” of what people think using brain scans and artificial intelligence modeling.
The main purpose of language decoders is to help people who have lost the ability to communicate, but the US scientists behind the work acknowledged that the technology raises questions about “mental privacy.”
To allay such fears, the researchers ran tests showing that the decoder could not be used on anyone who had not first allowed it to be trained on their own brain activity over long hours inside a functional magnetic resonance imaging (fMRI) scanner.
Previous studies have shown that brain implants can enable people who can no longer speak or type to spell words and sentences.
These “brain-computer interfaces” focus on the part of the brain that controls the mouth when trying to form words.
Alexander Huth, a neuroscientist at the University of Texas at Austin and co-author of the new study, said his team’s language decoders “operate on a very different level.”
“Our system really works at the level of ideas, semantics and meaning,” Huth said at an online press conference.
According to the study, published in the journal Nature Neuroscience, this is the first system capable of reconstructing continuous language without invasive brain implants.
- “Deeper than language” -
In the study, three people spent a total of 16 hours listening to narratives, mostly podcasts such as The New York Times’ Modern Love, inside an fMRI machine.
This allowed researchers to map how words, phrases, and meanings elicit responses in areas of the brain known to process language.
This data was fed into a neural network language model based on GPT-1, a predecessor of the AI technology later deployed in the wildly popular ChatGPT. The model was trained to predict how each person’s brain would respond to perceived speech, then narrowed down candidate phrases until it found the closest match.
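To make that candidate-scoring step concrete, here is a minimal toy sketch in Python (not the study’s code; the embedding function, encoding weights and example phrases are invented for illustration) of how an encoding model can rank candidate phrases by how well their predicted brain responses match an observed scan:

```python
# Toy sketch of candidate scoring with an encoding model (illustrative only;
# the real study fits a per-participant encoding model on language-model
# features learned from many hours of fMRI recordings).
import zlib
import numpy as np

rng = np.random.default_rng(0)

VOXELS = 50      # number of fMRI voxels (toy size)
EMBED_DIM = 16   # dimensionality of a made-up text embedding

# Invented encoding weights standing in for a fitted per-participant model.
encoding_weights = rng.normal(size=(EMBED_DIM, VOXELS))

def embed(phrase: str) -> np.ndarray:
    """Stable toy embedding of a phrase (placeholder for language-model features)."""
    seed = zlib.crc32(phrase.encode("utf-8"))
    return np.random.default_rng(seed).normal(size=EMBED_DIM)

def predict_response(phrase: str) -> np.ndarray:
    """Encoding model: map text features to a predicted voxel response."""
    return embed(phrase) @ encoding_weights

def best_candidate(observed: np.ndarray, candidates: list[str]) -> str:
    """Return the candidate whose predicted response correlates best with the scan."""
    scores = [np.corrcoef(predict_response(c), observed)[0, 1] for c in candidates]
    return candidates[int(np.argmax(scores))]

# Simulate a noisy scan evoked by a "true" phrase, then decode it from candidates.
true_phrase = "she has not started learning to drive yet"
observed = predict_response(true_phrase) + rng.normal(scale=0.1, size=VOXELS)

candidates = [
    "she has not started learning to drive yet",
    "he bought a new car last week",
    "the weather was cold that morning",
]
print(best_candidate(observed, candidates))  # should print the closest phrase
```

The correlation-based ranking here is only a stand-in for whatever likelihood measure the published decoder actually uses; the point is the loop of predicting a response for each candidate and keeping the best match.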
To test the model’s accuracy, each participant then listened to a new story while inside the fMRI machine.
Jerry Tang, lead author of the study, said the decoder “can recover the gist of what the user was hearing.”
For example, when a participant heard the phrase “I don’t have a driver’s license yet,” the model responded, “She hasn’t started learning to drive yet.”
The decoder struggled with personal pronouns such as “I” and “she,” researchers admitted.
But even when participants made up their own stories or watched silent films, the decoder was still able to get the “gist.”
This shows that we are “decoding something deeper than language and translating it into language,” Huth said.
Because fMRI scans are too slow to capture individual words, they pick up “a jumbled, multi-second collection of information,” Huth said.
“So even if the exact words are lost, we can still see how the ideas evolve.”
- Ethical warning -
David Rodriguez-Arias Vailhen, a bioethics professor at the University of Granada in Spain, who was not involved in the study, said it surpasses what has been achieved so far with brain-computer interfaces.
This could bring us closer to a future in which machines “can read minds and transcribe thoughts,” he said, warning that it could be done against people’s will, for example while they sleep.
The researchers anticipated such concerns and ran tests showing that the decoder does not work on a person whose own specific brain activity it has not already been trained on.
The three participants were also able to easily disrupt the decoder. While listening to one of the podcasts, they were told to count to seven, to name and imagine animals, or to tell a different story in their minds; all of these tactics, the researchers said, “interfered” with the decoding.
Next, the team hopes to speed up the process so that brain scans can be decoded in real time.
They also called for regulations to protect mental privacy.
“So far, our minds have protected our privacy,” said bioethicist Rodriguez-Arias Vailhen.
“This discovery could be a first step toward compromising that freedom in the future.”