Study finds brain responds differently to AI and human voices

VIENNA, Austria: Humans aren't very good at distinguishing between human voices and those generated by artificial intelligence (AI), but the human brain nevertheless responds differently to the two, according to a study presented today (Tuesday) at the Federation of European Neuroscience Societies (FENS) Forum 2024.

The study was presented by postdoctoral researcher Christine Skjegstad of the Department of Psychology at the University of Oslo (UiO) in Norway and conducted by Skjegstad and Professor Sascha Frühholz.

Skjegstad said: “We already know that AI-generated voices have become so advanced that they are nearly indistinguishable from real human voices. It is now possible to replicate a person's voice from just a few seconds of recording, and scammers are using this technology to impersonate loved ones in need to trick victims into sending them money. Machine learning experts are developing technical solutions to detect AI voices, but little is known about how the human brain responds to these voices.”

The study involved 43 participants who were asked to listen to human- and AI-generated voices expressing five different emotions: neutrality, anger, fear, happiness and pleasure. Participants were asked to identify whether the voices were artificial or natural while their brains were scanned using functional magnetic resonance imaging (fMRI), which detects changes in blood flow in the brain and shows which parts of the brain are active. Participants were also asked to rate the features of the voices they heard in terms of naturalness, believability and authenticity.

Participants were similarly poor at identifying both types of voices, correctly identifying human voices only 56% of the time and AI voices only 50.5% of the time.

People were more likely to correctly recognize “neutral” AI voices as AI (75% compared to 23% who correctly recognized neutral human voices as human), suggesting that people assume neutral voices are more AI-like. Neutral female AI voices were recognized correctly more often than neutral male AI voices. For happy human voices, the correct recognition rate was 78% compared to just 32% for happy AI voices, suggesting that people associate happiness with being more human-like.

Both the AI and the neutral human voices were perceived as the least natural, trustworthy and authentic, while the happy human voice was perceived as the most natural, trustworthy and authentic.

But when the researchers looked at brain images, they found that the human voice elicited stronger responses in brain regions associated with memory (right hippocampus) and empathy (right inferior frontal gyrus), while the AI voice elicited stronger responses in regions associated with error detection (right anterior midcingulate cortex) and attention regulation (right dorsolateral prefrontal cortex).

Skjegstad said: “My research shows that identifying whether a voice is human or AI-generated is not very accurate. Participants also frequently mentioned how difficult it was for them to tell the difference between the voices. This suggests that current AI voice technology mimics human voices in a way that makes it difficult for humans to reliably distinguish them.”

“The results also indicate a perception bias, with neutral voices more likely to be perceived as AI-generated, and happy voices more likely to be perceived as more human, regardless of whether they are actually human. This is especially true for neutral female AI voices, which may be because we are used to female voice assistants such as Siri and Alexa.

“We're not very good at distinguishing between human and AI voices, but there do seem to be differences in how our brains respond. AI voices can make us feel more alert, whereas human voices can make us feel more familiar.”

The researchers now plan to study whether personality traits such as extroversion and empathy increase or decrease a person's sensitivity to picking up differences between human and AI voices.

Professor Richard Roche, Associate Head of Psychology at Maynooth University in Maynooth, County Kildare, Ireland, and Chair of the FENS Forum's Communications Committee, who was not involved in the study, said: “Investigating the brain's responses to AI voice is crucial as this technology continues to advance. This research will help us understand the possible cognitive and social impacts of AI voice technology and may inform the development of policy and ethical guidelines.”

“There are obvious risks that this technology could be used for fraud and deception. But there are also potential benefits, such as providing a voice replacement for people who have lost their natural voice. AI voice could also be used to treat some mental illnesses.”

/Public Release. This material from the originating organization/author may be out of date and has been edited for clarity, style and length. Mirage.News does not take any organizational stance or position and all views, positions and conclusions expressed here are solely those of the authors.


