summary: A new study finds that more than 60% of participants would prefer AI over humans to make redistribution decisions, even though they perceive AI decisions as less satisfying and less fair. Researchers conducted an online experiment with more than 200 participants in the UK and Germany.
The study highlights the need for transparency and accountability in AI decision-making, and its findings suggest that improving algorithmic consistency could increase public acceptance of AI in moral decision-making situations.
Key Facts:
- AI preference: More than 60% of participants preferred an AI over a human to make redistribution decisions.
- Perceived fairness: Participants nevertheless rated the AI's decisions as less satisfying and less fair than human decisions.
- Transparency required: Transparency and accountability are crucial to public acceptance of AI.
source: University of Portsmouth
A new study finds that people prefer artificial intelligence (AI) over humans when it comes to redistribution decisions.
As technology continues to be integrated into various aspects of public and private decision-making, understanding public perceptions and satisfaction, and ensuring transparency and accountability of algorithms will be key to their acceptance and effectiveness.

The study, conducted by researchers from the University of Portsmouth and the Max Planck Institute for Innovation and Competition, explored public attitudes towards algorithmic versus human decision-making, and examined how potential discrimination affects these preferences.
To explore preferences for a human versus an AI decision maker, the researchers conducted an online experiment in which two people performed a series of tasks and the resulting earnings were then redistributed between them.
More than 200 participants from the UK and Germany were asked to vote on whether they would rather have a human or an algorithm (AI) decide how much money they received.
In contrast to previous findings, over 60% of participants chose an AI over a human to decide how to redistribute the earnings. Participants supported the algorithm regardless of potential discrimination. This preference challenges the traditional notion that human decision makers are favored for decisions with a “moral” component such as fairness.
However, despite the preference for algorithms, when evaluating the decisions made, participants were less satisfied with the AI decisions and perceived them as less “fair” than decisions made by humans.
The subjective evaluation of the decisions was driven primarily by participants’ own material interests and ideals of fairness: while participants tolerated reasonable deviations between the actual decisions and their ideals, they reacted strongly and negatively to redistribution decisions that did not correspond to any established principle of fairness.
Dr Wolfgang Luhan, Associate Professor of Behavioural Economics in the School of Accounting, Economics and Finance at the University of Portsmouth and corresponding author of the study, said: “Our research suggests that while people are open to the idea of algorithmic decision-making, particularly because of the potential for unbiased decisions, the actual performance and the ability to explain how decisions are made play a crucial role in acceptance.”
“Algorithmic transparency and accountability are crucial, especially in the context of moral decision-making.
“Many companies are already using AI in hiring decisions and compensation plans, and public agencies are adopting it in policing and parole strategies. Our findings suggest that greater algorithmic consistency may lead the public to increasingly support algorithmic decision makers, even in morally sensitive areas.”
“If the right AI approach is adopted, it could actually improve the acceptance of policies and management choices, such as salary increases or bonus payments.”
About this AI research news
author: Glenn Harris
source: University of Portsmouth
contact: Glenn Harris – University of Portsmouth
image: Image courtesy of Neuroscience News
Original Research: Open access.
“Ruled by the robots: Preferences for algorithmic decision makers and perceptions of their choices,” by Wolfgang Luhan et al. Public Choice
Abstract
Ruled by the robots: Preferences for algorithmic decision makers and perceptions of their choices
As technology-enabled decision-making becomes more prevalent, it is important to understand how the algorithmic nature of the decision-maker influences how the decision is perceived by those affected.
We use online experiments to investigate preferences for human or algorithmic decision makers in redistributive decisions, and in particular, whether algorithmic decision makers are preferred for their fairness.
Contrary to previous findings, the majority of participants (over 60%) prefer algorithms over humans as decision makers, but this is not due to concerns about biased decisions.
However, despite these preferences, decisions made by humans are evaluated more favorably: the subjective evaluation of the decision is driven primarily by the participants' own material interests and ideals of fairness.
Participants tolerate explainable deviations between actual decisions and their ideals, but react very strongly and negatively to redistributive decisions that are inconsistent with fairness principles.