Using game shows as a guide, researchers use AI to predict deception


Credit: Pixabay/CC0 Public Domain


Researchers at Virginia Commonwealth University used data from a 2002 game show to teach a computer how to tell if someone is lying.

"Human behavior is rich in cues to deception and trust," said Xunyu Chen, assistant professor in the Department of Information Systems at the VCU School of Business. "These information sources can be leveraged for decision-making using artificial intelligence methods such as machine learning and deep learning."

One of the first papers to quantitatively examine high-stakes deception and trust, "High-stakes trust and deception: Evidence from the 'Friend or Foe' dataset," was published in the current issue of the journal Decision Support Systems. Chen and his team used a new dataset derived from the American game show "Friend or Foe?," which is based on the prisoner's dilemma: a game theory scenario in which two people can benefit from cooperating but struggle to coordinate and suffer when they fail to do so.
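The incentive structure at the heart of "Friend or Foe?" can be sketched in a few lines. The payoff rule below follows the show's format (split the pot if both cooperate, the lone defector takes everything, both defectors get nothing); the pot value is a placeholder, not a figure from the study.

```python
# Illustrative payoff rule for a "Friend or Foe?"-style trust game.
# Each contestant secretly chooses "friend" (cooperate) or "foe" (defect):
#   both choose friend -> the pot is split evenly
#   one chooses foe    -> the defector takes the whole pot
#   both choose foe    -> neither gets anything
def payoff(choice_a, choice_b, pot=1000):
    """Return the (player A, player B) winnings for one round."""
    if choice_a == "friend" and choice_b == "friend":
        return pot / 2, pot / 2
    if choice_a == "foe" and choice_b == "friend":
        return pot, 0
    if choice_a == "friend" and choice_b == "foe":
        return 0, pot
    return 0, 0

print(payoff("friend", "friend"))  # (500.0, 500.0)
print(payoff("foe", "friend"))     # (1000, 0)
print(payoff("foe", "foe"))        # (0, 0)
```

Choosing "foe" dominates individually, yet mutual defection leaves both players with nothing, which is what makes the contestants' on-camera attempts to signal trust (or conceal deception) so informative.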

Laboratory experiments, which have commonly been used to study trust and deception, are limited in realism and generalizability. The high-stakes deception seen on game shows demands more cognitive resources for behavioral management than lower-stakes hypothetical cases. The significant rewards or punishments tied to high-stakes decisions may also induce stronger emotional and behavioral fluctuations in facial, verbal, and bodily cues.

"We discovered multimodal behavioral indicators of deception and trust in high-stakes decision-making scenarios, which could be used to predict deception with high accuracy," Chen said. He calls such predictive devices automatic deception detectors.
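To make the idea of predicting deception from multimodal cues concrete, here is a minimal sketch of a cue-based classifier. The feature names, the synthetic data, and the nearest-centroid method are all illustrative assumptions, not the study's actual dataset or model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Synthetic multimodal cue features per statement (illustrative only):
# columns = [smile_rate, pitch_variance, hedge_word_count]
X = rng.normal(size=(n, 3))
# Synthetic ground truth: "deceptive" when vocal and linguistic cues are elevated
y = (X[:, 1] + X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

# Nearest-centroid classifier: label a statement by whichever class's
# average cue profile it is closest to in feature space.
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y_test).mean()
print(f"held-out accuracy: {accuracy:.2f}")
```

Because the synthetic labels are driven by two of the three features, even this simple model beats chance on held-out data; the paper's detectors would use far richer facial, verbal, and behavioral features extracted from the game-show footage.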

This study advances our scientific and quantitative understanding of deception and trust behaviors, which can have significant consequences. Researchers and practitioners can use its results to analyze human behavior in high-stakes situations such as presidential debates, business negotiations, and courtrooms to predict deception and the protection of self-interest.

More information:
Xunyu Chen et al, High-stakes trust and deception: Evidence from the "Friend or Foe" dataset, Decision Support Systems (2023). DOI: 10.1016/j.dss.2023.113997



