Automatically disadvantaged? What benefit recipients think of AI use in welfare decisions


Algorithms in government: Using AI systems to approve social benefits promises more speed and efficiency. But are these systems accepted by everyone? Credit: MPI for Human Development

The use of artificial intelligence (AI) in government is increasing worldwide, including in the allocation of social services such as unemployment benefits, housing benefits, and social welfare. However, an international team of researchers at the Max Planck Institute for Human Development and the Toulouse School of Economics shows that those who rely on such benefits are the most skeptical of automated decisions. For AI-supported systems to gain trust and acceptance, the perspectives of those affected must be taken into account.

A few years ago, the city of Amsterdam piloted an AI program called Smart Check, designed to identify potential cases of welfare fraud. Instead of selecting applications for review at random, the system sifted through numerous data points from local government records, including address, family composition, income, assets, and previous welfare claims, to assign each application a “risk score.”

Applications deemed “high risk” were flagged for investigation and forwarded to administrative staff for additional scrutiny. In practice, however, the process disproportionately flagged vulnerable groups, including immigrants, women, and parents, and often gave applicants neither clear reasons for the suspicion nor effective routes to challenge it.
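To make the mechanism concrete, the sketch below shows what a threshold-based risk-scoring step of this kind might look like. The feature names, weights, and cutoff are hypothetical illustrations; the actual Smart Check model has not been published in this form.

# Hypothetical sketch of a threshold-based risk-scoring step (Python).
# Feature names, weights, and the cutoff are invented for illustration;
# they do not reflect the real Smart Check model.

RISK_WEIGHTS = {
    "prior_claims": 0.4,     # previous welfare claims, normalized to [0, 1]
    "income_gap": 0.3,       # mismatch between declared income and records
    "address_changes": 0.2,  # recent address changes, normalized to [0, 1]
    "household_size": 0.1,   # family-composition signal, normalized to [0, 1]
}
HIGH_RISK_CUTOFF = 0.7  # scores at or above this trigger manual review

def risk_score(application: dict) -> float:
    """Weighted sum of normalized features, yielding a score in [0, 1]."""
    return sum(weight * application.get(name, 0.0)
               for name, weight in RISK_WEIGHTS.items())

def flag_for_review(application: dict) -> bool:
    """True if the application is forwarded to caseworkers for scrutiny."""
    return risk_score(application) >= HIGH_RISK_CUTOFF

The bias concern described above maps directly onto this structure: if any weighted feature correlates with group membership, the cutoff systematically routes those groups into extra scrutiny, regardless of the designers' intent.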

Following mounting criticism from advocacy groups, legal scholars, and researchers, the city suspended the program earlier this year, and recent reviews have confirmed the system's significant shortcomings.

This case highlights a central dilemma in the use of AI in welfare administration. Such systems promise greater efficiency and faster decisions, but they also risk reinforcing bias, eroding trust, and disproportionately burdening vulnerable groups. Against this background, researchers set out to investigate how directly affected individuals perceive the growing role of AI in the distribution of social benefits.

In research published in Nature Communications, researchers from the Max Planck Institute for Human Development and the Toulouse School of Economics conducted three large-scale surveys with over 3,200 participants in the US and the UK to investigate how people feel about the use of AI in allocating social benefits.

The research focused on a realistic dilemma: are people willing to accept faster decisions by machines, even if this means a higher rate of unfair rejections? A key finding was that while many citizens are willing to accept minor losses of accuracy in exchange for shorter waiting times, recipients of social benefits have strong reservations about AI-supported decisions.

“Policy decisions often rest on the risky assumption that the average opinion represents the reality of all stakeholders,” explains Mengchen Dong, a research scientist at the Center for Humans and Machines at the Max Planck Institute for Human Development, who works on ethical issues surrounding the use of AI.

In fact, the study reveals clear disparities. Welfare recipients reject AI-supported decisions far more often than non-recipients, even when the system promises faster processing.

Another problem: non-recipients systematically overestimate how much those affected trust AI. This holds true even when participants are financially rewarded for accurately assessing the other group's perspective. Vulnerable groups, by contrast, gauge the majority's perspective more accurately than the majority gauges theirs.

Methodology: Simulated decision-making dilemmas and perspective-taking

The researchers presented participants with realistic decision-making scenarios. They could choose between faster decisions made by an AI system and processing by human caseworkers with longer waiting times (e.g., eight weeks).

Participants were asked which option they preferred, either from their own perspective or, as part of a targeted perspective-taking exercise, by placing themselves in the shoes of the other group (benefit recipients or non-recipients).
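To make the structure of this choice concrete, the sketch below casts it as a trade-off between weeks of waiting saved and extra risk of unfair rejection. All numbers and the tolerance parameter are illustrative assumptions, not values from the study.

# Illustrative framing of the speed-vs-accuracy dilemma (Python).
# Waiting times and unfair-rejection rates are invented placeholders.

from dataclasses import dataclass

@dataclass
class Option:
    decider: str
    weeks_to_decision: int
    unfair_rejection_rate: float  # share of eligible applicants wrongly denied

ai = Option("AI system", weeks_to_decision=1, unfair_rejection_rate=0.15)
human = Option("human caseworker", weeks_to_decision=8, unfair_rejection_rate=0.10)

def prefers_ai(tolerance_pp_per_week: float) -> bool:
    """A respondent prefers AI if the weeks saved outweigh the extra error.

    tolerance_pp_per_week: how many percentage points of additional unfair
    rejections the respondent accepts per week of waiting time saved.
    """
    weeks_saved = human.weeks_to_decision - ai.weeks_to_decision
    extra_error_pp = (ai.unfair_rejection_rate - human.unfair_rejection_rate) * 100
    return tolerance_pp_per_week * weeks_saved >= extra_error_pp

The study's key result can be read in these terms: non-recipients behave as if their tolerance were relatively high, while recipients, who bear the cost of an unfair rejection themselves, behave as if it were much lower.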

While the US sample was representative of the population (around 20% of respondents were currently receiving social benefits), the UK study deliberately aimed for a 50/50 split between recipients and non-recipients of Universal Credit, a social benefit for low-income households. This allowed the researchers to systematically record differences between the groups. Demographic factors such as age, gender, education, income, and political orientation were also taken into account.

Does perspective-taking help? And does a right to appeal?

The UK substudy also tested whether financial incentives could improve participants' ability to adopt the other group's perspective. Participants received a bonus payment if their estimates came close to the other group's actual opinions. Despite the incentives, systematic misjudgments persisted, especially among those not receiving benefits.

Other attempts to strengthen trust in AI had similarly limited success. Some participants were told that the hypothetical system would allow them to appeal AI decisions to human administrators. This information increased acceptance slightly, but it rarely altered the basic assessment of AI use.

Consequences for trust in government and administration

The research shows that acceptance of AI in the social welfare system is closely linked to trust in government agencies. The more people resented the use of AI in welfare decisions, the less they trusted the government deploying it. This applied to recipients and non-recipients alike.

In the UK study, which examined the planned use of AI in the allocation of Universal Credit, many participants stated that they preferred human caseworkers over AI, even when both performed equally in terms of speed and accuracy. References to possible appeal processes rarely changed this.

A call for participatory development of AI systems

The researchers warn against designing AI systems for the allocation of social benefits according to the will of the majority or on the basis of aggregated data alone. “If the perspectives of vulnerable groups are not actively considered, there is a risk of flawed decisions with real consequences, such as unfair benefit denials and false accusations,” says Jean-François Bonnefon, head of the Social and Behavioral Sciences department at the Toulouse School of Economics.

The authors therefore call for redirecting the development of public-sector AI systems away from purely technical efficiency metrics and toward a participatory process that explicitly includes the perspectives of vulnerable groups. Otherwise, there is a risk of developments that undermine trust in both public administration and technology in the long run.

Building on this work in the US and the UK, an ongoing collaboration will draw on the data infrastructure of Statistics Denmark to reach vulnerable groups in Denmark and capture their perspectives on broader administrative decisions.

More information:
Mengchen Dong et al, Heterogeneous preferences and asymmetric insights for AI use between welfare claimants and non-claimants, Nature Communications (2025). DOI: 10.1038/s41467-025-62440-3

Provided by Max Planck Society

Citation: Automatically disadvantaged? What benefit recipients think of AI use in welfare decisions (2025, September 29), retrieved 30 September 2025 from https://phys.org/news/2025-09-automately-disadvantaged-disadventaged-venefit-recipers-ai.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.




