Home Office’s use of AI in asylum cases is likely to be illegal, legal opinion finds

A legal opinion published today says the Home Office’s failure to notify asylum seekers that AI tools were being used to assess them is likely to be unlawful. The Home Office’s use of AI tools was also found not to meet a number of legal obligations and standards set out in the UK Government’s AI Handbook.

The Opinion, written by Robin Allen KC and Dee Masters of Cloisters Chambers and Joshua Jackson of Doughty Street Chambers, paves the way for a legal challenge by asylum seekers who believe AI is being used to determine whether they are granted protection in the UK.

The legal opinion is titled “Government use of artificial intelligence tools: A case study of the Home Office’s asylum practice”.

The Government has confirmed that the Home Office is using AI to summarize both asylum interview records and internal policy documents.

The Asylum Case Summary (ACS) tool uses ChatGPT-4 to condense asylum interview notes into a concise summary document.

The Asylum Policy Search (APS) tool summarizes country policy and information notes (CPINs), guidance documents, and country of origin information (COI) reports.

As the legal opinion states, both of these tools “create new text for consideration by decision makers, rather than simply indexing and organizing existing source information.”

Asylum seekers are not informed that AI is being used to assess their applications. The opinion states that, as a matter of procedural fairness, this is “likely to be illegal.” Additionally, if the ACS tool generates an inaccurate summary of an applicant’s personal data and the applicant has no opportunity to correct it, this may violate data protection laws.

The Home Office’s own assessment of the ACS tool found that 9% of AI-generated summaries were so flawed that they had to be removed from the pilot, while 5% of APS users said they were not confident in the tool’s accuracy.

“Given the obvious inaccuracies in the summaries prepared by APS and ACS, there is a significant risk that decisions based on those summaries will be based on material factual errors and thereby be invalidated,” the opinion said.

The opinion also highlights the importance of the Government’s own detailed and careful guidance in the UK Government AI Handbook, noting that the Home Office failed to follow the Handbook’s principles and established procedural safeguards, particularly with regard to transparency and the obligation to be open and collaborative when introducing AI.

These tools may also not meet public sector equality obligations. This duty requires public authorities to consider how their policies affect people protected by the Equality Act. The Home Office has not published equality impact assessments for either tool, so it is not possible to know whether this duty has been met or whether there are wider equality issues.


Robin Allen KC and Dee Masters from Cloisters Chambers said:

“To be lawful, the use of AI requires great care. The public has a right to expect the Home Office to apply the UK Government’s AI Handbook with care, particularly on sensitive issues such as asylum claims, and so do applicants. Our opinion highlights the legal risks if this does not happen.”

“Technology can support decision-making, but it must not undermine the careful human judgment required in asylum claims. If AI tools are used without appropriate safeguards, there is a real risk that unlawful or unfair decisions could result.”

“If AI tools are to influence asylum decisions, there must be full transparency about how those systems work and how their output is used. Without that transparency, it becomes very difficult to ensure that decisions affecting fundamental rights are lawful and fair.”
