Impact of AI on Trust in Human Interactions



As AI becomes increasingly realistic, it can erode our trust in the people we communicate with. Researchers at the University of Gothenburg investigated how advanced AI systems affect our trust in the individuals we interact with.

Professor Jonas Ivarsson, Professor Oskar Lindwall

In one scenario, a would-be scammer believes he is calling an elderly man, but is in fact connected to a computer system that communicates through pre-recorded loops. The scammer spends a good deal of time attempting the fraud, patiently listening to the "man's" somewhat confusing and repetitive stories. Oskar Lindwall, professor of communication studies at the University of Gothenburg, says it often takes people a long time to realize they are interacting with a technical system.

In collaboration with Jonas Ivarsson, professor of informatics, he wrote an article titled Suspicious Minds: The Problem of Trust and Conversational Agents, which examines how individuals interpret and relate to situations in which one of the parties may be an AI agent. The article highlights the negative consequences of harboring suspicion toward others, such as the damage it can do to relationships.

Ivarsson gives the example of a romantic relationship in which trust problems arise, leading to jealousy and an increased tendency to search for evidence of deception. The authors argue that such undue suspicion can emerge even in the absence of any actual deceit.

Their study found that during interactions between two humans, some behaviors are interpreted as signs that one of the parties is in fact a robot.

The researchers suggest that a prevailing design perspective is driving the development of AI with increasingly human-like features. While this can be appealing in some situations, it becomes problematic when it is unclear whom you are communicating with. Ivarsson questions whether AI should have such human-like voices at all, since they create a sense of intimacy and lead people to form impressions based on the voice alone.

In the case of the would-be scammer calling the "older man," the fraud is only exposed after a long time. Lindwall and Ivarsson attribute this to the believability of the human voice and the assumption that the confused behavior is age-related. Once an AI has a human-sounding voice, we infer attributes such as gender, age, and socioeconomic background from it, making it harder to recognize that we are interacting with a computer.

The researchers propose developing AI voices that are eloquent and function well but still sound clearly synthetic, which would increase transparency.

Communicating with others involves not only the risk of deception but also relationship building and joint meaning-making, and uncertainty about whether we are talking to a human or a computer affects this aspect of communication. In some situations, such as cognitive-behavioral therapy, this may not be a problem, but other forms of therapy that require more human connection may be adversely affected.

/Release. This material from the originating organization/author may be of a point-in-time nature and has been edited for clarity, style, and length. The views, positions, and conclusions expressed herein are solely those of the author.



