Machine learning model reveals how the brain recognizes communication sounds



Schematic configuration of the proposed speech processing network in the brain

Image: A loud sound input passes through a network of excitatory and inhibitory neurons in the auditory cortex. These neurons clean up the signal (partly guided by the attentive listener) and detect sound signatures, allowing the brain to recognize a communication sound regardless of how it is voiced by different speakers or of the ambient noise.

Credit: Manaswini Kar

Pittsburgh, May 2, 2023 — In a paper published today in Communications Biology, an auditory neuroscientist at the University of Pittsburgh describes a machine learning model that helps explain how the brain perceives meaning in communication sounds, such as animal calls and spoken words.

The algorithms described in the study model how social animals, such as marmosets and guinea pigs, use sound-processing networks in their brains to distinguish between categories of sounds, such as calls for mating, food, or danger, and act on them accordingly.

The study is an important step toward understanding the intricacies of the neuronal processing that underlies sound recognition. The insights from this work pave the way for understanding, and eventually treating, disorders that affect speech recognition, and for improving hearing aids.

“More or less everyone we know will lose some of their hearing at some point in their lives, either as a result of aging or exposure to noise. Understanding the biology of speech recognition and finding ways to improve it is important. But the process of vocal communication is fascinating in and of itself. The way our brains interact with one another and can take ideas and convey them through sound is just magical,” said senior author Srivatsun Sadagopan.

Humans and animals encounter an astounding variety of sounds every day, from the cacophony of the jungle to the hum of a busy restaurant. No matter the sound pollution of the world around us, humans and other animals are able to communicate and understand one another, regardless of differences such as pitch or accent. For example, when we hear the word “hello,” we recognize its meaning regardless of whether it is spoken with an American or British accent, whether the speaker is a woman or a man, or whether we are in a quiet room or at a busy intersection.

The team started with the intuition that the way the human brain recognizes and assigns meaning to communication sounds may be similar to how it recognizes faces compared with other objects. Faces are highly diverse, but they share some common characteristics.

Instead of matching every face we encounter to a perfect “template” face, our brain picks up on useful features, such as the eyes, nose, and mouth and their relative positions, and creates a mental map of these small characteristics that define a face.

In a series of studies, the team showed that communication sounds may also be made up of such small characteristics. The researchers first built a machine learning model of sound processing to recognize the different sounds made by social animals. To test whether the brain's responses corresponded with the model, they recorded brain activity from guinea pigs listening to their kin's communication sounds. Neurons in the regions of the brain responsible for processing sounds lit up with a flurry of electrical activity when they heard a noise containing features present in specific types of these sounds, much like the machine learning model did.
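The paper specifies the actual model; purely as an illustration of the general idea of detecting small sound features and letting them vote for a call category, here is a minimal Python sketch. All of the names (CALL_FEATURES, classify_call) and the random feature templates are hypothetical and are not taken from the study.

# Minimal sketch (not the authors' model): classify a call by detecting a small
# set of informative spectrotemporal "features" in its spectrogram and letting
# the detected features vote for a call category.
import numpy as np
from scipy import signal

# Hypothetical feature templates: small spectrogram patches that tend to occur
# in calls of a given category. Random placeholders here; a real model would
# learn these from labeled calls.
CALL_FEATURES = {
    "food":   [np.random.rand(8, 10) for _ in range(3)],
    "alarm":  [np.random.rand(8, 10) for _ in range(3)],
    "mating": [np.random.rand(8, 10) for _ in range(3)],
}

def log_spectrogram(waveform, sr):
    # Log-magnitude spectrogram of a mono waveform (assumes a call at least a
    # few tenths of a second long so the templates fit inside it).
    _, _, spec = signal.spectrogram(waveform, fs=sr, nperseg=256, noverlap=128)
    return np.log(spec + 1e-8)

def feature_detected(spec, template, threshold=0.6):
    # 1.0 if the template correlates strongly with some patch of the spectrogram.
    corr = signal.correlate2d(spec, template, mode="valid")
    corr = corr / (np.linalg.norm(spec) * np.linalg.norm(template) + 1e-8)
    return float(corr.max() > threshold)

def classify_call(waveform, sr):
    # Pick the category whose feature detectors fire most often.
    spec = log_spectrogram(waveform, sr)
    votes = {cat: sum(feature_detected(spec, t) for t in feats)
             for cat, feats in CALL_FEATURES.items()}
    return max(votes, key=votes.get)

The appeal of this kind of scheme is that a category is signaled by the presence of a few characteristic features rather than by a match to a whole-call template, which is what makes it tolerant of changes in the rest of the sound.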

Next, the researchers wanted to check the model's performance against the real-life behavior of the animals.

Guinea pigs were placed in an enclosure and exposed to different categories of sounds. The researchers then trained the guinea pigs to walk to different corners of the enclosure and receive fruit rewards depending on which category of sound was played.

Next, they made the task more difficult: they ran the calls through sound-altering software, speeding them up or slowing them down, raising or lowering their pitch, or adding noise and echoes.
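To make those manipulations concrete, here is a minimal sketch (not the researchers' actual software) of how such alterations could be applied to a recorded call, assuming the librosa library and a single-channel waveform; the function name and parameter values are arbitrary examples.

# Sketch of the kinds of call manipulations described above: tempo shift,
# pitch shift, added noise, and a simple echo.
import numpy as np
import librosa

def alter_call(waveform, sr):
    # Return several altered versions of a recorded call.
    faster = librosa.effects.time_stretch(waveform, rate=1.2)          # speed up by 20%
    higher = librosa.effects.pitch_shift(waveform, sr=sr, n_steps=2)   # raise pitch 2 semitones
    noisy  = waveform + 0.01 * np.random.randn(len(waveform))          # add background noise
    delay = int(0.15 * sr)                                             # 150 ms echo delay
    echoed = np.copy(waveform)
    echoed[delay:] += 0.4 * waveform[:-delay]                          # attenuated delayed copy
    return {"faster": faster, "higher": higher, "noisy": noisy, "echoed": echoed}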

Not only were the animals consistently able to perform the task as if the calls they heard were unaltered, they also continued to perform well despite the artificial echoes and noise. Better still, the machine learning model fully explained their behavior (and the underlying activation of sound-processing neurons in the brain).
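As a rough illustration of what it means for the model to explain the behavior, one could compare the model's accuracy with the animals' accuracy across the altered-sound conditions. The sketch below uses made-up placeholder numbers and is not the authors' analysis.

# Illustration only: do the model's accuracies track the animals' accuracies
# across listening conditions? The numbers are placeholders, not study results.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-condition accuracies (original, faster, slower,
# higher pitch, lower pitch, noisy, echoed).
animal_accuracy = np.array([0.92, 0.88, 0.86, 0.85, 0.87, 0.80, 0.78])
model_accuracy  = np.array([0.95, 0.90, 0.89, 0.86, 0.88, 0.81, 0.80])

r, p = pearsonr(model_accuracy, animal_accuracy)
print(f"model vs. behavior across conditions: r = {r:.2f} (p = {p:.3f})")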

As a next step, the researchers are translating the model's accuracy from animal calls to human speech.

“From an engineering standpoint, there are much better speech recognition models out there. What is unique about our model is that it maps closely onto behavior and brain activity, giving us more insight into the biology. In the future, these insights may help us assist people with neurodevelopmental disorders and design better hearing aids,” said lead author Dr. Satyabrata Parida, a postdoctoral fellow in Pitt's Department of Neurobiology.

“Many people struggle with conditions that make it hard for them to recognize speech,” said Manaswini Kar, a student in Sadagopan's lab. “By understanding how the brain recognizes these sounds, we will be better able to understand and help those who are struggling.”

An additional author of this study is Dr. Shi Tong Liu of Pitt.





