A new study finds that people prefer artificial intelligence (AI) over humans when it comes to redistribution decisions.
As technology becomes integrated into more areas of public and private decision-making, understanding public perceptions, measuring satisfaction, and ensuring the transparency and accountability of algorithms will be key to their acceptance and effectiveness.
The study, conducted by researchers from the University of Portsmouth and the Max Planck Institute for Innovation and Competition, explored public attitudes towards algorithmic versus human decision-making, and examined how potential discrimination affects these preferences.
To explore preferences for a human or AI decision maker, the researchers ran an online experiment in which pairs of people performed a series of tasks and then had the earned income redistributed between them. Over 200 participants in the UK and Germany were asked to vote on whether they would prefer a human or an algorithm (AI) to decide how much income each of them would receive.
In contrast to previous findings, over 60% of participants chose an AI over a human to decide how the income should be redistributed, and they supported the algorithm regardless of the potential for discrimination. This preference challenges the traditional notion that human decision makers are favored in decisions with "moral" components such as fairness.
However, despite this stated preference for algorithms, participants were less satisfied with the AI's decisions when evaluating them afterwards, and perceived them as less "fair" than decisions made by humans.