Canada Battles AI-Generated Fraud in Asylum Applications

Canadian federal authorities have warned that artificial intelligence is being exploited to create fraudulent information in immigration and refugee applications, even as government agencies leverage the same technology to combat such schemes. Immigration, Refugees and Citizenship Canada (IRCC) and the Immigration and Refugee Board (IRB), an independent tribunal overseeing asylum cases, have confirmed identifying instances where AI was used to fabricate details within submissions.

IRCC spokesperson Isabelle Dubois told the Globe and Mail that the department has observed cases where AI tools were employed to generate deceptive applications. She emphasized that while efforts to detect and prevent fraud are ongoing, sharing specific examples could inadvertently assist fraudsters in evading detection.

The IRB highlighted that the rise of AI-generated fraud poses a significant challenge for its staff. Appeals are becoming more prolonged, yet the increased volume of cases does not always correlate with stronger legal arguments. Officials noted that some submissions include references to non-existent court decisions or legal precedents that do not align with the claimants' actual positions. This has introduced unnecessary complexity and delays into the adjudication process. In a statement, the IRB acknowledged that the trend complicates its operations, requiring additional resources to verify the authenticity of claims.

Toronto immigration lawyer Max Berger described AI as the next evolution of "ghost consultants": individuals who fabricate documentation or narratives for asylum seekers.
