Artificial intelligence is being used to bolster immigration and asylum claims in Canada, generating fabricated narratives that include references to non-existent court decisions.
Both the federal government’s Immigration, Refugees and Citizenship Canada (IRCC) and the Immigration and Refugee Board (IRB), the independent tribunal that adjudicates asylum claims, say they have detected AI being used in applications containing false or inaccurate information.
The IRB said the use of AI in refugee claims poses new challenges for its staff.
“Recently, we have observed that appeal memos are becoming longer, but this increase in volume does not necessarily translate into stronger arguments. In fact, these documents sometimes contain references to non-existent case law, or cite case law for propositions it does not actually support,” the IRB said in a statement. “This makes our work unnecessarily complex and time-consuming.”
If misrepresentation, use of forged documents, or other types of fraud are confirmed, foreign nationals can be subject to a five-year ban from entering Canada.
The Canada Border Services Agency, IRCC, and RCMP are investigating immigration fraud.
The IRB said it would alert partner organizations if it “identified potential integrity concerns through regular review of files by employees.”
The Immigration Department said it has discovered people using AI to forge documents, but it will not publish examples to avoid helping fraudsters find ways to evade detection.
“We have observed instances where AI has been used to generate fraudulent applications,” IRCC spokeswoman Isabelle Dubois said. “While we work to detect and prevent fraud, publicly sharing these specific examples could inadvertently help fraudulent claimants identify other ways to avoid detection.”
IRCC has recently faced calls to step up investigations into irregularities in immigration files. Last month, the Auditor-General sharply criticized the department for failing to investigate more than 149,000 international students who were reported as not complying with the conditions of their study permits.
Auditor-General Karen Hogan’s report into the international student programs run by IRCC concluded there were “significant weaknesses” in the department’s anti-fraud controls.
Since 2019, the IRB has decided more than 45,000 refugee cases without in-person hearings.
Max Berger, an immigration lawyer in Toronto, said he worries that AI will become the new “ghost consultant” in asylum cases.
“Ghost consultants who fabricate stories on behalf of some claimants are now the scourge of the asylum process. Instead of paying ghost consultants, the small number of asylum seekers looking to game the system will be able to ask an AI to fabricate a history of persecution for free,” he said in an email.
Thousands of asylum applications are decided by the IRB on the file alone, through administrative processes without oral hearings. But Berger said asylum hearings allow the IRB to question claimants, including about fabricated narratives.
“The antidote is to hold an oral refugee hearing where credibility is tested by an IRB member,” he added.
In 2024, the Federal Court issued a practice direction requiring lawyers and litigants to disclose the use of AI in court filings, including in immigration cases.
According to IRCC’s AI Strategy, published earlier this year, the department is experimenting with a number of AI tools, many of them focused on fraud prevention. The department said artificial intelligence is helping it detect fabricated narratives. Machine learning tools are also being used to flag anomalies in applications and irregular travel patterns, which could indicate that a refugee or immigrant is from a different country or region than the one claimed.
AI systems are trained to detect the manipulation of documents such as academic records and bank statements, as well as artificially “distorted” photos that can be used to commit identity fraud or mislead immigration officials about a person’s age.
Both the IRCC and IRB say they also use AI and other technology tools to improve efficiency, but not to make decisions, such as whether someone should be allowed to remain in Canada.
The IRB said in its departmental plan for 2026-27 that it would introduce tools to “support the rapid preparation of files.”
“While we are exploring the use of AI to improve productivity and optimize our overall operations, we do not plan to use the technology for adjudicative decision-making,” the board said in a statement.
The board already uses audio-to-text transcriptions of testimony given during refugee hearings, which are checked for accuracy. Its legal team is also using AI to draft summaries of Federal Court decisions, which are reviewed by paralegals or lawyers before they are finalized, the IRB said in a statement.
In its departmental plan for 2026-27, the board said it plans to “accelerate file preparation and advance decision-making tools to support the drafting of decisions.”
The plan said such tools “help decision makers generate reasons in a concise, focused, and accessible format. These tools are not intended to replace or limit decision makers, but rather to streamline decision making.”
The IRB has already made AI training mandatory for employees. Its departmental plan states that it aims to use AI to reduce effort spent on repetitive tasks and improve overall efficiency.
“This includes introducing tools to enhance triage capabilities, optimize schedules, and support rapid file preparation.”
