‘Raw’ data shows AI signals reflect how the brain listens and learns


Illustration of a brain overlaid with noise waves

Researchers at the University of California, Berkeley measured the brain waves of study participants and compared them to the signals produced by an artificial intelligence system. That comparison, they say, offers a window into what is often considered an AI black box. (Photo credit: iStock)

New research from the University of California, Berkeley shows that artificial intelligence (AI) systems can process signals in a way that is remarkably similar to how the brain interprets speech. The discovery may help explain the black box of how AI systems work, scientists say.

Using a system of electrodes placed on participants’ heads, scientists in the Berkeley Speech Computing Lab measured EEG signals as participants listened to the single syllable “bah.” They then compared that brain activity to the signals produced by an AI system trained to learn English.

“The shapes are very similar,” says Gasper Begus, assistant professor of linguistics at UC Berkeley and lead author of a study recently published in the journal Scientific Reports. “This shows that similar things are encoded and processed similarly.”

A side-by-side comparison graph of the two signals clearly shows the similarity.

“There are no tweaks in the data,” added Begus. “This is raw.”

A person in a white shirt smiles at the camera

Gasper Begus, assistant professor of linguistics at the University of California, Berkeley. (Photo by Sue Brown)

AI systems have come a long way in recent years. Since ChatGPT took the world by storm last year, these tools have been predicted to upend entire sectors of society and revolutionize the way millions of people work. Yet despite these remarkable advances, scientists have had only a limited understanding of how the tools they created actually work between input and output.

Prompting ChatGPT with questions and evaluating its answers has become a benchmark for measuring the intelligence and biases of AI systems. But what happens between those two steps remains a black box. Knowing how and why these systems deliver the information they do, and how they learn, will become essential as they permeate everyday life in areas ranging from healthcare to education.

Begus and his co-authors, Alan Zhou of Johns Hopkins University and T. Christina Zhao of the University of Washington, are among a cadre of scientists working to crack open that box.

To do so, Begus turned to his training in linguistics.

When you hear spoken words, Begus explained, the sound enters your ears and is converted into electrical signals. Those signals then travel through the brainstem and out to the outer parts of the brain. In the electrode experiments, the researchers traced that path in response to a single sound repeated 3,000 times, and found that the brain waves for speech closely follow the actual sounds of the language.
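As a rough illustration of how a stimulus-locked response can be pulled out of noisy recordings by repeating a sound many times, the sketch below averages EEG epochs across repetitions. It is purely illustrative: the sampling rate, array shapes, and variable names are assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch only (not the study's pipeline): averaging EEG responses
# across many repetitions of the same sound so the stimulus-locked signal
# emerges from background noise. Sampling rate and epoch length are assumed.
import numpy as np

fs = 4096                              # assumed sampling rate in Hz
n_trials, n_samples = 3000, fs // 4    # 3,000 repetitions, 250 ms epochs

# epochs: one row per repetition of the syllable, aligned to stimulus onset
# (random numbers stand in here for the recorded EEG)
epochs = np.random.randn(n_trials, n_samples)

# Averaging across trials cancels activity unrelated to the stimulus,
# leaving the waveform that follows the sound itself.
evoked = epochs.mean(axis=0)
```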

The researchers then sent the same recording of the “bah” sound through an unsupervised neural network, an AI system that can interpret sound. Using techniques developed in the Berkeley Speech Computing Lab, they measured the corresponding waves and documented them as they occurred.

Previous studies required additional transformations before waves from brains and machines could be compared. Studying the waves in their raw form will help researchers understand and improve how these systems learn and increasingly come to mirror human cognition.
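One simple way to quantify how alike two raw waveforms are, without any spectral transformation, is a plain correlation of the signals themselves. The sketch below is a minimal, hypothetical example of that idea; the function name and the resampling step are assumptions and are not drawn from the published study.

```python
# Illustrative sketch only: comparing two raw waveforms directly by resampling
# them to a common length and computing a Pearson correlation of their shapes.
import numpy as np
from scipy.signal import resample

def waveform_similarity(brain_wave: np.ndarray, model_wave: np.ndarray) -> float:
    """Pearson correlation between two raw signals of possibly different lengths."""
    n = min(len(brain_wave), len(model_wave))
    a = resample(brain_wave, n)
    b = resample(model_wave, n)
    # Remove the mean and scale to unit variance so only the shape matters.
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.dot(a, b) / n)

# Toy usage: a noisy copy of a sine wave correlates strongly with the original.
t = np.linspace(0.0, 1.0, 1000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.1 * np.random.randn(t.size)
print(waveform_similarity(clean, noisy))   # close to 1.0
```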

“As a scientist, I am very interested in the interpretability of these models,” said Begus. “They are so powerful. Everyone is talking about them. And everyone is using them. But very few attempts have been made to understand them.”

Two oscillating lines showing waves from sound.  One is interpreted by humans and the other by AI systems. The graphic shows the similarity of the two patterns.

Researchers have found strikingly similar signals in the brain and in artificial neural networks. The blue line shows brain waves as humans listened to a vowel. The red line is the artificial neural network’s response to the exact same vowel. Both signals are raw; in other words, no transformation was required. (Photo credit: Gasper Begus)

Begus believes that what happens between input and output does not have to remain a black box. Understanding how those signals compare with human brain activity is an important benchmark in the race to build increasingly powerful systems, and so is knowing what is going on under the hood.

For example, that understanding could help set guardrails for increasingly powerful AI models. It could also help explain how errors and biases creep into the learning process.

Begus said he and his colleagues are collaborating with other researchers who use brain-imaging techniques to measure how these signals compare. They are also studying how other languages, such as Mandarin, are decoded differently in the brain and what that indicates about knowledge.

Many AI models are trained on visual cues, such as colors, or on written text, both of which have thousands of variations at a granular level. Language, however, opens the door to a more solid understanding, Begus said.

For example, English has only a few dozen sounds.

“If you want to understand these models, you have to start simple, and they are much easier to understand in spoken language,” says Begus. “We hope speech will help us understand how these models are learning.”

One of the main goals of cognitive science is to build mathematical models that resemble humans as closely as possible. The similarities between the newly documented brain waves and the AI’s waves are one benchmark of how close researchers are to that goal.

“I’m not saying we need to build something like humans,” Begus said. “And I’m not saying we don’t. But it’s important to understand how different architectures resemble or differ from humans.”




