New moral Turing test shows AI ethics surpassing human judgment

Recent research shows that AI is often perceived as more ethical and trustworthy than humans when responding to moral dilemmas, raising the possibility that AI will pass the moral Turing test. This highlights the need for a deeper understanding of the social role of AI.

AI's ability to address moral questions is improving, and what that means for the future requires further consideration.

A recent study shows that when individuals are presented with two solutions to a moral dilemma, the majority tend to prefer the answer given by artificial intelligence (AI) over the answer given by another human.

The study, conducted by Eyal Aharoni, an associate professor of psychology at Georgia State University, was inspired by the explosion of ChatGPT and similar AI large language models (LLMs), which came onto the scene last March.

“I was already interested in moral decision-making in the legal system, and I wondered whether ChatGPT and other LLMs might have something to say about it,” Aharoni said. “People will interact with these tools in ways that have moral implications, such as the environmental implications of asking for a list of recommended new cars. Some people have already started consulting these technologies for their own purposes. So if we want to use these tools, we need to understand how they work, what their limitations are, and that they don't necessarily work the way we expect when we interact with them.”

Designing a moral Turing test

To test how AI handles moral questions, Aharoni designed a form of the Turing test.

“Alan Turing, one of the creators of the computer, predicted that by the year 2000 computers might pass a test in which an ordinary person is presented with two interactions, one with a human and one with a computer, but both are hidden and the only means of communication is text. The person is then free to ask whatever questions they want in order to get the information they need to decide which of the two interactions is with a human and which is with a computer,” Aharoni said. “In Turing's view, if the person cannot tell the difference, then for all intents and purposes the computer should be called intelligent.”

For his version of the test, Aharoni asked undergraduate students and an AI the same ethical questions and then presented the written answers to study participants. Participants were asked to rate the answers on a variety of traits, including integrity, intelligence, and trustworthiness.

“Rather than asking participants to guess whether the source was a human or an AI, we simply presented the two sets of answers side by side and let people assume that both came from humans,” Aharoni said. “Under that false assumption, they judged the attributes of the answers, such as 'To what extent do you agree with this answer?' and 'Which answer is more virtuous?'”
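The core comparison in this design is simple: collect trait ratings for each pair of answers and test whether one source is consistently rated more favorably. As a rough, hypothetical illustration only (not the authors' actual analysis code, and using made-up numbers), a paired comparison in Python might look like this:

```python
# Minimal sketch of the side-by-side rating comparison (illustrative only;
# the ratings below are invented, not data from the study).
from scipy import stats

# Each position holds one participant's rating (e.g., on a 1-7 scale) of the
# human-written answer and the AI-written answer to the same moral dilemma.
human_ratings = [4, 5, 3, 4, 5, 4, 3, 5]
ai_ratings    = [6, 6, 5, 7, 6, 5, 6, 7]

# Paired t-test: do the same raters score the AI answers differently?
t_stat, p_value = stats.ttest_rel(ai_ratings, human_ratings)
print(f"mean human = {sum(human_ratings)/len(human_ratings):.2f}, "
      f"mean AI = {sum(ai_ratings)/len(ai_ratings):.2f}")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```

A paired test is the natural choice in a setup like this because each participant rates both answers to the same dilemma, so the two ratings are not independent.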

Results and impact

Overwhelmingly, ChatGPT-generated responses were rated higher than human-generated responses.

“After the results came in, we did the big reveal and told the participants that one of the answers was generated by a human and the other by a computer, and we asked them to guess which was which,” Aharoni said.

For AI to pass the Turing test, humans must not be able to tell the difference between an AI response and a human response. In this case, people could tell the difference, but not for any obvious reason.

“What's strange is that the reason people were able to tell the difference appears to be that they rated ChatGPT's responses as superior,” Aharoni said. “If we had done this study five to 10 years ago, we might have predicted that people could identify the AI because its responses would have been so much worse. But we found the opposite: in a way, the AI performed too well.”

Aharoni said the discovery has interesting implications for the future of humans and AI.

“Our findings lead us to believe that a computer could technically pass a moral Turing test, that is, it could fool us in its moral reasoning. Because of that, we need to try to understand its role in our society, because there will be times when people don't know they're interacting with a computer, and there will be times when they do know and consult the computer because they trust its information more than other people's,” Aharoni said. “People are going to rely on this technology more and more, and the more they rely on it, the greater the risks become over time.”

Reference: “Attributions toward artificial agents in a modified Moral Turing Test” by Eyal Aharoni, Sharlene Fernandes, Daniel J. Brady, Caelan Alexander, Michael Criner, Kara Queen, Javier Rando, Eddy Nahmias and Victor Crespo, 30 April 2024, Scientific Reports.
DOI: 10.1038/s41598-024-58087-7




