Reminding people that human decision-making can be biased may make the use of artificial intelligence seem less problematic, a new study says.
By drawing attention to the limitations of human decision-making, AI may appear more consistent and fair. This could increase pressure on governments from voters to rely more on algorithmic systems, rather than less.
The study shows that when people first think about the limitations of human decision-making, they tend to view AI more favorably by comparison. Conversely, when people consider AI decision-making first, they become more critical of human decision-makers.
The researchers investigated how people assess the risk of discrimination in public sector hiring decisions. They asked respondents how likely they were to face discrimination in hiring decisions made by AI systems or by human recruiters. Half of the respondents evaluated AI first, and half evaluated human recruiters first, which meant some respondents thought about potential human bias before evaluating AI decisions.
When respondents answered questions about human recruiters first, the potential for human bias became more prominent in their thinking.
The research was conducted by Florian Stoeckel from the University of Exeter, Ben Lyons from the University of Utah, and Adrienn Ujhelyi and Monika Kovacs from ELTE Eötvös Lorand University.
Professor Stoeckel said: “Evaluation of AI depends not only on the properties of the algorithm, but also on whether people compare it to human decision-making. Once a comparison is made, AI-based decision-making can look not only worse, but also better. This is important for how the public reacts to the use of AI in employment and the public sector more generally.”
“These findings suggest that public concern about bias in AI is not fixed; rather, it depends on the context in which people evaluate algorithmic systems. When the limitations of human decision-making are highlighted in public debate, AI systems may appear more favorable by comparison. The potential problem is that this shift in perception can occur even if the AI systems themselves still contain biases.”
“People seem to be relying on common assumptions about algorithms and computers when judging AI. Discussions can turn towards weaknesses in human decision-making, which can make AI seem more acceptable, even if the AI system itself has not been proven to be fair.”
“Therefore, there is a risk that those who want to increase public acceptance of AI will emphasize the shortcomings of human decision-makers rather than demonstrating that a particular AI system actually behaves fairly. When governments integrate AI, trust in these systems should be based on their actual merits and performance, rather than on comparisons with human weaknesses.”
“The opposite dynamic is also possible. When people think about AI decision-making first, they may begin to evaluate human decision-makers more critically. As AI becomes a visible alternative, attention may shift to the limits of human decision-making. In such situations, AI may not only seem faster and cheaper, but also potentially more consistent or fair. There could also be increased pressure on governments from the public to rely more, rather than less, on algorithmic systems.”
The YouGov survey was conducted among 11,000 participants in Austria, Germany and Australia.
