Improve transparency about how countries use AI to manage migration, experts urge


Credit: Pixabay/CC0 Public Domain

New research shows that countries need to be more transparent about how they use AI to manage migration in order to boost public trust and strengthen the rule of law.

Experts warn that the use of AI in migration management can perpetuate bias and error, promote excessive reliance on technology, and undermine trust in decision-making processes. Appropriate cybersecurity measures are also necessary to protect sensitive data about vulnerable migrants.

However, using AI in migration management can also present opportunities, such as freeing up caseworkers' time, provided that potential risks are properly identified and addressed in responsible ways that avoid or mitigate them.

A study by Professor Ana Beduschi of the University of Exeter highlights the importance of improving transparency about how countries use AI in migration management, and of ensuring compliance with international human rights law.

States must ensure that AI is used responsibly and in a way that respects migrants' rights and dignity throughout the various stages of the migration process.

Governments use AI technologies, including generative AI, to streamline workloads and improve efficiency in migration management operations.

However, not all countries publicly acknowledge how they use AI in international migration management.

The study states that countries should publicly acknowledge their use of AI without necessarily revealing sensitive details that could compromise national security or personal data. This includes information about which AI systems are being used, for what purposes, and whether and to what extent human oversight and review are involved.

Professor Beduschi said, "Improved transparency would help increase public acceptance of the use of AI in public services. It could also lead to better accountability and ensure that decisions are justified and in line with the rule of law."

When using AI to manage migration, states must comply with international human rights law, including rules on the right to privacy and guarantees of non-discrimination.

Professor Beduschi has developed a risk matrix that can be used to identify, prioritize, avoid, and mitigate risks. This framework will help states use AI responsibly in international migration management.

The study encourages a proactive and thorough assessment of whether AI systems, including generative AI, could cause harm to migrants and their communities or exacerbate their existing vulnerabilities.

More information:
Responsible Artificial Intelligence in International Migration Management: Legal and Practical Considerations: ore.exeter.ac.uk/repository/bi…871/141203/beduschi-%20iom%20mpp-%20june%202025.pdf?sequence=1&isAllowed=y

Provided by the University of Exeter

Citation: Improve transparency about how countries use AI to manage migration, experts urge (2025, July 17), retrieved July 23, 2025 from https://phys.org/news/2025-07-transparency-countres-ai-migration-experts.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
