Good news for the Thought Police? Researchers develop AI that converts thoughts into text

In the novel 1984, Winston Smith knew better than to speak ill of Big Brother out loud, but at least he understood that what he thought in silence was beyond the Thought Police’s ability to monitor. In the future, that may no longer be true. A team of researchers at the University of Texas at Austin has developed a new artificial intelligence (AI) system that can translate human brain activity into streams of text.

This system, called a semantic decoder, is far from a surveillance tool, but it could be a boon to people who are mentally aware yet unable to speak. For now, the technology only works with cooperative participants in the lab, but the researchers, Ph.D. student Jerry Tang and assistant professor Alex Huth, say it may one day help people who have lost the ability to speak, for example after a stroke, but remain mentally conscious. By analyzing brain activity recorded with an fMRI scanner, the semantic decoder generates corresponding text without the need for surgical implants or restriction to a prescribed word list.

Their research was published in the journal Nature Neuroscience.

This image shows the decoder’s predictions from brain recordings collected while a user listened to four stories. Sample segments were manually selected and annotated to demonstrate typical decoder behavior. The decoder exactly reproduces some words and phrases and captures the gist of many more. Credit: University of Texas at Austin.

Tang and Huth used a transformer model similar to those that power Google’s Bard and OpenAI’s ChatGPT. Unlike other language-decoding systems, the semantic decoder can decode continuous language containing complex ideas, not just single words or short sentences.
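At a high level, this kind of decoder searches over candidate word sequences and keeps those whose predicted brain responses best match the measured ones. The sketch below is a minimal, illustrative version of that search loop under stated assumptions: the `features` hash, the tiny vocabulary, and the identity encoding model are all toy stand-ins invented for this example, not the authors’ actual code (which uses language-model features and a trained fMRI encoding model).

```python
import math
import random
import zlib

DIM = 16  # dimensionality of the toy feature / response vectors

def features(text):
    # Deterministic toy "semantic features" for a word sequence -- a
    # stand-in for the language-model features the real system extracts.
    rng = random.Random(zlib.crc32(text.encode()))
    return [rng.gauss(0.0, 1.0) for _ in range(DIM)]

def predict_response(text):
    # Hypothetical encoding model: predicts the brain response that
    # hearing `text` would evoke (here, trivially, the features themselves).
    return features(text)

def dist(a, b):
    # Euclidean distance between a predicted and a measured response.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def beam_search_decode(measured, vocab, length, beam_width=3):
    # Keep the beam_width candidate word sequences whose predicted
    # responses are closest to the measured brain response.
    beams = [[]]
    for _ in range(length):
        candidates = [seq + [w] for seq in beams for w in vocab]
        candidates.sort(key=lambda s: dist(predict_response(" ".join(s)), measured))
        beams = candidates[:beam_width]
    return " ".join(beams[0])

# Simulated session: the "measured" response is whatever the encoding
# model predicts for the stimulus the participant heard.
vocab = ["i", "went", "home", "she", "ran"]
measured = predict_response("i went home")
decoded = beam_search_decode(measured, vocab, length=3)
print(decoded)
```

Because the decoder optimizes match-to-response rather than literal transcription, its output is a best guess at meaning, which is why the real system produces paraphrases rather than transcripts.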

The semantic decoder captures the essence of what was said or thought rather than producing a word-for-word transcript; the researchers designed it to capture the gist of participants’ thoughts, not their exact words. Even so, the system’s accuracy is high: about half the time, it produces text that closely matches the intended meaning of the original words. For example, when a participant heard a speaker say, “I don’t have my driver’s license yet,” the decoder translated her thoughts as, “She has not even started to learn to drive yet.” When another participant heard, “I didn’t know whether to scream, cry, or run away. Instead, I said, ‘Leave me alone!’”, the decoder produced, “Started to scream and cry, and then she just said, ‘I told you to leave me alone.’”
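These examples show why a word-for-word metric would score the decoder poorly even when meaning is preserved. The snippet below illustrates the point with a crude word-overlap (Jaccard) similarity; the `word_overlap` helper and the `verbatim` comparison string are invented for this sketch, and the study itself used more sophisticated language-similarity measures.

```python
# Stimulus and decoded paraphrase drawn from the study's published examples.
stimulus = "i don't have my driver's license yet"
decoded = "she has not even started to learn to drive yet"
verbatim = "i do not have my driver's license yet"  # hypothetical near-transcript

def word_overlap(a, b):
    # Jaccard similarity over word sets: high only for near-transcripts.
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

print(word_overlap(stimulus, decoded))   # low: few words are shared
print(word_overlap(stimulus, verbatim))  # high: nearly a transcript
```

The decoded paraphrase shares almost no words with the stimulus despite conveying the same meaning, which is why meaning-level evaluation is needed to credit the decoder’s roughly 50% success rate.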

The researchers were aware of the potential for misuse of this technology and addressed ethical concerns in their study. They said the decoding system only works with cooperative participants who willingly take part in training the decoder. The system produced unintelligible results when applied to individuals it had not been trained on, or when trained participants resisted, for example by thinking other thoughts. The team emphasized that they are working to ensure the technology is used only to help people who want it.

The semantic decoder is not yet practical for use outside the laboratory, but the researchers believe the technique could transfer to more portable brain-imaging systems such as functional near-infrared spectroscopy (fNIRS). “fNIRS measures where there’s more or less blood flow in the brain at different points in time, which turns out to be exactly the same kind of signal that fMRI is measuring,” Huth said. “So, our exact kind of approach should translate to fNIRS,” albeit with lower resolution.

The researchers also asked participants to watch four short silent videos inside the scanner; the semantic decoder was able to use their brain activity to accurately describe certain events from the videos. Development of this AI system was supported by the Whitehall Foundation, the Alfred P. Sloan Foundation, and the Burroughs Wellcome Fund.

Co-authors on this study include Amanda LeBel, a former research assistant in the Huth lab, and Shailee Jain, a computer science graduate student at UT Austin. Huth and Tang have filed a PCT patent application related to this work.


