MSU research delves deeper into how well AI can detect human deception

Can an AI persona detect when a human is lying? If so, should we trust it? Artificial intelligence (AI) has made many advances recently, and its scope and capabilities continue to evolve. A new study led by Michigan State University delves deeper into how well AI can understand humans by using it to detect human deception.

In a study published in the Journal of Communication, researchers from MSU and the University of Oklahoma conducted 12 experiments with over 19,000 AI participants to investigate how well AI personas can detect deception and truth in human subjects.

“The purpose of this study was to understand how effective AI can be for deception detection and for simulating human data in social science research, and to caution experts when using large language models for lie detection.”


David Markowitz, associate professor of communication in the MSU College of Communication Arts and Sciences and lead author of the study

To evaluate AI against human deception detection, the researchers turned to truth-default theory (TDT). TDT suggests that people are honest most of the time and that we tend to believe others are telling the truth. The theory gave the researchers a benchmark for comparing how AI behaves against how humans behave in the same kinds of situations.

“Humans have an innate truth bias, and we typically assume others are being honest, regardless of whether they actually are,” Markowitz says. “This tendency is thought to be evolutionarily beneficial, because constantly doubting everyone takes a lot of effort, makes everyday life difficult, and puts a strain on relationships.”

To analyze the AI personas’ decisions, the researchers used the Viewpoints AI research platform to present audiovisual or audio-only recordings of humans to AI judges. Each AI judge was asked to decide whether the human subject was lying or telling the truth and to provide evidence for its decision. Several variables were manipulated to see how they affected detection accuracy: media type (audiovisual or audio-only), contextual background (information or circumstances that help explain why something happens), the lie-to-truth base rate (the proportion of honest versus deceptive communication), and the AI’s persona (an identity created to act and speak like a real human).

For example, one study found that the AI was lie-biased: it was far more accurate at identifying lies (85.8%) than truths (19.5%). In a short interrogation setting, the AI’s deception accuracy was comparable to humans’. In non-interrogation settings (such as evaluating statements about friends), however, the AI exhibited a truth bias and more closely matched human performance. Overall, the results showed that the AI was more lie-biased than humans and far less accurate.
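As a rough illustration of why the lie-to-truth base rate matters (a sketch for intuition only, not the study’s analysis), per-class accuracies combine into overall accuracy as a weighted average:

```python
# Sketch: overall accuracy as a base-rate-weighted average of per-class
# accuracies. The 85.8% / 19.5% figures are the per-class accuracies
# reported above; the base rates here are illustrative assumptions.

def overall_accuracy(lie_acc: float, truth_acc: float, lie_rate: float) -> float:
    """Weighted average of accuracy on lies and accuracy on truths."""
    return lie_acc * lie_rate + truth_acc * (1 - lie_rate)

# With an even 50/50 mix of lies and truths: 0.858*0.5 + 0.195*0.5 = 0.5265
even_mix = overall_accuracy(0.858, 0.195, 0.5)

# In mostly honest communication (e.g., 10% lies), a lie-biased judge
# fares much worse: 0.858*0.1 + 0.195*0.9 = 0.2613
mostly_honest = overall_accuracy(0.858, 0.195, 0.1)

print(f"50% lies: {even_mix:.1%}, 10% lies: {mostly_honest:.1%}")
```

A lie-biased judge looks passable when half the messages are lies, but because real communication is mostly honest (the truth-default), its overall accuracy collapses, which is one reason base rate was a key experimental variable.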

“Our main goal was to see what we could learn about AI by including it as a participant in a deception detection experiment. In this study and in the model we used, we found that AI was sensitive to context. However, this did not improve its ability to detect lies,” Markowitz said.

The findings suggest that AI results do not match human results or accuracy, and that human nature may be an important limit, or boundary condition, on how deception detection theory applies. The study highlights that while using AI for detection may seem unbiased, the field needs to make significant progress before generative AI can be relied on for deception detection.

“It’s easy to see why people would want to use AI to detect lies. It seems like a high-tech, potentially fair, and objective solution. But our research shows we’re not there yet,” Markowitz said. “Both researchers and practitioners will need to make significant improvements before AI can truly handle deception detection.”

Source:

Michigan State University

Journal reference:

Markowitz, D. M., & Levine, T. R. (2025). (In)effectiveness of AI personas in deception detection experiments. Journal of Communication. doi.org/10.1093/joc/jqaf034


