- Scientists have developed a non-invasive AI system that converts human brain activity into a stream of text.
- This system, called a semantic decoder, may ultimately help patients who have lost the ability to physically communicate.
- Once the AI system is trained, it can generate streams of text as participants hear or imagine new stories.
Alex Huth (left), Shailee Jain (middle), and Jerry Tang (right) prepare to collect brain activity data at the University of Texas at Austin’s Biomedical Imaging Center. Researchers trained a semantic decoder based on dozens of hours of brain activity data from participants collected on an fMRI scanner.
Photo: Nolan Zunk/University of Texas at Austin.
Scientists have developed a non-invasive AI system that translates human brain activity into sequences of text, according to a peer-reviewed study published Monday in the journal Nature Neuroscience.
This system, called a semantic decoder, could ultimately help patients who have lost the ability to physically communicate as a result of a stroke, paralysis, or degenerative disease.
Researchers at the University of Texas at Austin developed the system in part using a transformer model, similar to the ones that power Google’s Bard and OpenAI’s ChatGPT.
Participants in the study trained the decoder by listening to hours of podcasts inside an fMRI scanner, a large machine that measures brain activity. This system does not require any kind of surgical implant.
PhD student Jerry Tang prepares to collect brain activity data at the University of Texas at Austin’s Biomedical Imaging Center.
Photo: Nolan Zunk/University of Texas at Austin.
Once the AI system is trained, it can generate streams of text as participants listen to or imagine new stories. The resulting text is not a word-for-word transcript; rather, it is designed to capture the gist of participants’ thoughts and ideas.
According to the release, the trained system produces text that closely or exactly matches the intended meaning of the participant’s original words about half the time.
For example, if a participant heard the words “I don’t have a driver’s license yet” during the experiment, the thought was translated to “She hasn’t started learning to drive yet.”
“For a non-invasive method, this is a real leap forward compared to what has been done before, which is typically single words or short sentences,” said Alexander Huth, one of the study’s leaders. “We’re getting the model to decode continuous language for extended periods of time with complicated ideas,” he said in a release.
Participants were also asked to watch four silent videos while in the scanner, and the AI system was able to accurately describe “certain events” from them, the release said.
As of Monday, the decoder cannot be used outside a laboratory setting because it relies on an fMRI scanner. But researchers believe it could eventually be used with more portable brain-imaging systems, according to the release.
The principal investigator of this study has filed a PCT patent application for this technology.
