Image provided by: UntoldMag
This article by Tzion Grum, Hinako Sugiyama, and Sobechukwu Uwajeh was first published by UntoldMag on November 25, 2025. An edited version is republished on Global Voices as part of a content-sharing agreement. This post is part of Global Voices’ April 2026 Spotlight Series “A human perspective on AI.” You can support this coverage by making a donation here.
From grocery shopping to streaming services, schools to workplaces, combat zones to governance, artificial intelligence (AI) is emerging everywhere.
As AI becomes more deeply integrated into governance and security, its role in border enforcement and immigration control is expanding rapidly. These technologies often reproduce and reinforce racism, especially through algorithmic bias, and the US government’s use of so-called “smart borders” is no exception.
What happens when we deploy AI to decide who can travel, who can be detained, and who can be removed at borders?
A human rights framework
In connection with a 2023 meeting with the United Nations Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia, and related intolerance, the Black Alliance for Just Immigration (BAJI) and the Immigrant Rights Clinic and International Justice Clinic at the University of California, Irvine (UCI) School of Law submitted a report detailing how AI disproportionately harms Black immigrants and other immigrants of color, and offering recommendations for change.
A legal framework governing how states use AI already exists under international human rights law. Chief among its instruments is the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD), which the United States ratified in 1994.
ICERD requires countries to:
- prevent all forms of racial discrimination (Article 2(1)(a));
- reform policies and laws that perpetuate racial discrimination (Article 2(1)(c));
- guarantee equal treatment under the law (Article 5);
- ensure redress for victims (Article 6); and
- hold private actors accountable (Article 2(1)(d)).
According to these standards, the United States has a legal obligation to ensure that AI does not promote racial inequality.
Border surveillance
In reality, however, BAJI and the UCI clinics detail how US AI-driven border enforcement violates many of these rules at every step of the immigration process.
Even before migrants reach land borders, AI systems track their movements. Instead of human patrols, Customs and Border Protection (CBP) is deploying autonomous surveillance towers and drones to identify “objects of interest.”
The rapidly expanding use of surveillance towers and small unmanned aircraft systems (sUAS) on the U.S.-Mexico border raises serious racial equity concerns. First, those under surveillance include many who are fleeing violence, persecution, and even torture, and who have the right to seek protection in the United States under domestic and international law. But with more limited access to formal immigration processes, migrants of color are forced to risk their lives to cross the border.
Second, the use of Anduril towers, sUAS, and other AI-powered surveillance systems at the border perpetuates discrimination by framing immigrants as lawbreakers and threats to national security rather than as people seeking safety.
Disproportionate surveillance of immigrants of color pushes them into ever more dangerous terrain, producing disproportionately higher death rates among those same groups.
CBP claims its new AI-powered systems are more responsible and humane than a physical border wall. According to the agency, the “smart border” can deter irregular crossings and improve migrant safety by locating, apprehending, and safely removing migrants lost in deserts and mountains.
However, the data shows that the opposite is true. The increasing adoption of “smart border” technology has resulted in historically high mortality rates among immigrants.
Algorithmic risk scoring
Formal entry routes are also shaped by algorithmic bias. The CBP One app, introduced by the Biden administration to streamline immigration processing, was previously required for all immigration applications and used a selfie to verify each applicant’s identity. But the system frequently failed to recognize darker skin tones, misidentifying Black faces 10 to 100 times more often than white faces, as legal scholar Priya Morley notes in “AI at the Border: Racialized Impacts and Implications.”
The app was also inaccessible to many communities: the lack of translation into major languages spoken by Black immigrants created further barriers. CBP One is no longer available, but discussions about reinstating it continue under the current administration.
Even migrants who pass this first stage face the Automated Targeting System (ATS), which draws on national and international databases to predict who is likely to overstay a visa.
Although risk assessments are common in immigration systems, ATS perpetuates existing biases. For example, when Nigeria was added to the list of countries facing increased travel restrictions in 2020, Nigerians were disproportionately flagged as high risk by the system.
Officials insist these tools are preventive rather than punitive. But their very design perpetuates structural racism and contradicts US commitments under ICERD.
ICE enforcement in the United States
Once in the United States, immigrants encounter further AI-driven discrimination from Immigration and Customs Enforcement (ICE) during detention and interior enforcement.
ICE uses predictive algorithms such as the “Hurricane Score” to decide who merits increased surveillance. Because the algorithm is supplied by BI Incorporated, a private company with deep ties to the prison industry, the government has not had to disclose which factors influence the score.
ICE also uses the Repository for Analytics in a Virtualized Environment (RAVEn) platform to analyze trends and patterns across a suite of data sources and further assess the supposed risks immigrants pose. RAVEn draws on local law enforcement data and international databases from offices in 56 countries, and immigrants cannot opt out of, or even consent to, this data collection.
The lack of transparency and avenues of redress in these systems has raised serious concerns among rights watchdogs about compliance with ICERD provisions and anti-discrimination regulations.
Decolonizing AI
Finally, within the immigration benefits system, U.S. Citizenship and Immigration Services (USCIS) uses AI to classify evidence and detect fraud in applications. Asylum Text Analytics (ATA) is a machine-learning model trained to flag fraud by reading the text of asylum applications.
ATA can be biased against non-English-speaking applicants, especially those who speak less widely used languages and rely on the same translation providers. Because such translations tend to produce similar phrases and descriptions across applications, ATA may flag legitimate claims as fraudulent.
Rather than simplifying the application process, USCIS also uses an AI-powered evidence classifier to “review” millions of pages of evidence, from birth certificates to medical records to photographs, on behalf of USCIS adjudicators. These automated checks can disadvantage immigrants who may hold non-standard documents, and they often exacerbate racial discrimination.
BAJI and UCI argue that a decolonial approach to AI is needed to address these harms. They call for Cosmo-uBuntu, an African philosophical framework rooted in collectivism and shared humanity rather than individualism. This involves voluntarily embracing ubuntu (personhood) as a “basic value system for participating in the convivial atmosphere on Earth without imposing universality.”
In contrast to a Western-centric, individualistic view of humanity, African cosmology embraces the humanity of all people.
To truly decolonize AI in line with ICERD, African and diaspora communities must be actively involved in conceptualizing, inventing, innovating, and operating AI systems.
Policy recommendations
- Individuals who may be adversely affected by the use of AI must be promptly informed of such decisions and given the option to opt out of AI systems if they wish.
- U.S. federal laws governing DHS’s use of AI must prohibit and prevent uses that produce racially discriminatory outcomes or exacerbate systemic racism. These laws should require effective anti-discrimination measures, independent oversight of implementation, thorough public disclosure, consultation with a diverse range of stakeholders, and access to effective remedies for those adversely affected by DHS’s use of AI.
- City policies should include an explicit commitment not to share information with DHS if that information is expected to be used in the development or deployment of AI by DHS or its vendors.
Each of these calls carries a clear message: AI systems must not be used at any border until they are free from discrimination and diverse perspectives are meaningfully incorporated into their development and use.
