
Department of Homeland Security (DHS) Secretary Kristi Noem visits the Customs and Border Protection Unmanned Aerial Systems Operations Center for an industry demonstration, joined by DHS component heads and the CEOs of 22 counter-unmanned aircraft systems vendors, at Summit Point, WV, on July 24, 2025 (DHS Photo by Mikaela McGee)
Following the passage of President Donald Trump's "big beautiful bill," the US is expected to spend billions more on technology to monitor borders, track immigrants and implement mass detention and deportation programs. A portion of that money will go toward acquiring and deploying new AI systems, including facial recognition, social media surveillance, surveillance towers and database analytics. However, the US has previously committed to international legal obligations that demand a very different approach to these biased AI systems and the push toward "smart borders."
In response to a call for input from the UN Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, the Black Alliance for Just Immigration (BAJI) and the UC Irvine (UCI) School of Law Immigrant Rights Clinic and International Justice Clinic recently filed a report detailing how US use of AI in border enforcement harms Black migrants, drawing on international human rights law.
Because AI can implicate a wide range of basic human rights, many human rights laws apply directly to the way states use AI on people. In the context of racism, the most relevant is the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD). ICERD sets out negative obligations for states to "engage in no act or practice of racial discrimination" and, importantly, positive obligations to mitigate structural racism, particularly to "amend, rescind or nullify any laws and regulations which have the effect of creating or perpetuating racial discrimination." It also requires states to ensure equal treatment before the law, effective remedies and private sector compliance. The United States ratified ICERD in 1994 and is bound under international law to these commitments.
The BAJI and UCI clinics' report details how US use of AI in border enforcement violates these ICERD obligations throughout the immigration process.
Many migrants are affected by US surveillance along their travel routes even before they arrive at the border. Customs and Border Protection (CBP) uses autonomous surveillance towers and small unmanned aerial systems to identify human movement and other "objects of interest" in place of surveillance by border patrol agents. The use of these devices proves harmful to immigrants in two ways. First, it marks these individuals as law violators rather than people seeking safety and refuge. Second, immigrants often take more dangerous routes to the border to avoid detection, increasing migrant deaths in ways that disproportionately affect Black migrants.
At ports of entry, AI systems make official entry routes more difficult for Black immigrants. For example, the previously used CBP One app required applicants to include a selfie in their immigration applications, verified as a "live person" through the Traveler Verification Service. However, CBP One's facial recognition technology often failed to recognize immigrants with darker skin tones, misidentifying Black faces at rates 10 to 100 times higher than white faces, according to Priya Morley's report on the racialized impact of border technologies. Furthermore, CBP One was often unavailable in the languages and dialects of many Black immigrant groups, creating an additional barrier to entry. The app is no longer in use, but its revival remains a possibility under the current administration.
Even if immigrants overcome this stage, they can still be harmed by the Automated Targeting System (ATS). Under the ATS, the US draws on data from multiple national and international databases to determine which individuals are likely to overstay their visas. Risk assessments are common in immigration systems, but the ATS perpetuates existing biases because the data points it relies on to predict who is likely to overstay are themselves shaped by discriminatory practices. For example, because Nigeria was added in 2020 to the list of countries facing heightened travel restrictions, the AI system now flags Nigerian and similarly situated applicants as higher risk.
Tools like the ATS typically escape US legal scrutiny because, advocates say, they are framed as preventive rather than punitive. However, the use of these tools directly contradicts US commitments under ICERD, both the negative obligation to avoid engaging in racial discrimination and the positive obligation to mitigate structural racism.
Then, once immigrants enter the United States, they face discrimination from Immigration and Customs Enforcement (ICE) in interior enforcement and detention. ICE uses predictive algorithms such as the "Hurricane Score" to determine who warrants increased monitoring. There is a lack of transparency around the factors that feed the Hurricane Score: because the algorithm is proprietary to BI Incorporated, a private company with strong ties to the prison industry, the government has not been required to disclose the factors behind the score. This opacity raises serious concerns about compliance with ICERD's requirements that discrimination not persist and that immigrants be treated equally under the law. The lack of information also leaves those labeled "high risk" without access to effective remedies to challenge that designation.
ICE also uses the Repository for Analytics in a Virtualized Environment (RAVEn) platform to analyze trends and patterns across a set of data sources to further assess the risks immigrants may pose in the US. These data are drawn from local and federal law enforcement agencies (rife with disproportionate bias) and from a network of offices in over 56 countries. RAVEn's outputs can have a significant impact on immigrants' lives, yet immigrants have no opportunity to consent to their information being fed into the system or to opt out. Like the Hurricane Score, RAVEn also suffers from a lack of transparency.
Finally, in the immigration relief system, AI is used by US Citizenship and Immigration Services (USCIS) to sort evidence and detect fraud in applications. USCIS uses a machine learning tool called Asylum Text Analytics (ATA), which scans the text of asylum applications to identify fraud. ATA can disproportionately flag non-native English speakers, especially those who speak languages without widespread translation support. Because applications translated through the same providers may contain similar phrasing, ATA risks screening out people with legitimate claims whose applications resemble other filings.
Ostensibly to streamline the application process, USCIS also uses AI-powered evidence classifiers to "review" millions of pages of evidence, ranging from medical records to photographs, before they reach USCIS adjudicators. These AI reviews can negatively impact immigrants who may have atypical documentation and can further exacerbate racial discrimination.
The report's proposed solution, decolonizing artificial intelligence, would ensure that US immigration policy embodies collectivist rather than individualistic views. Recalling that the 2001 Durban Declaration and Programme of Action, adopted by the UN General Assembly in 2002, identified colonialism as a root cause of racism and racial discrimination, BAJI and the UCI clinics call for undoing colonial practices in AI, including the voluntary adoption of the African cosmology of ubuntu (personhood) as "a fundamental value system for participating in the flourishing of the planet without forcing universality."
In contrast to the Western-centric, individualistic view of humanity, this African cosmology embraces the humanity of all human beings. Decolonizing AI, which forms part of US obligations under ICERD, requires that the African diaspora play an important role in conceptualizing, inventing, innovating and operating AI.
Additionally, BAJI and the UCI School of Law clinics make specific recommendations in their report to the Department of Homeland Security, the White House, Congress, and state and local governments. These recommendations include:
- Ensure that individuals who may be adversely affected by AI use are promptly notified of such decisions and are given the option to opt out of AI systems where necessary;
- Enact federal legislation governing DHS's use of AI;
- Prohibit and prevent the use of AI that produces racially discriminatory outcomes or exacerbates structural racism;
- Mandate (i) effective anti-discrimination measures, (ii) independent monitoring of implementation, (iii) robust public disclosure, (iv) consultation with diverse stakeholder groups, and (v) access to effective remedies for those adversely affected by DHS's use of AI; and
- Adopt and amend city policies to include an explicit pledge that no information will be shared with DHS if it is expected to be used by DHS or its vendors to develop or deploy AI.
Embedded in each of these calls is a resounding demand: an immediate end to DHS's use of AI systems until the government ensures that the systems it deploys are free of discrimination, and until diverse perspectives are meaningfully included in their development and use.
