AI's Phantom Cases: A Growing Crisis in Canadian Courts

📊 Key Data
  • 211 non-existent cases cited as valid law in Canadian legal submissions since 2024
  • Generative AI identified or presumed as the source in 82 of the 111 decisions involved
  • 78.4% of the flagged decisions involved submissions from self-represented litigants
🎯 Expert Consensus

Experts warn that AI-generated fake legal citations are a systemic threat to Canada's justice system, disproportionately harming vulnerable self-represented litigants and requiring urgent regulatory and technological safeguards.

TORONTO, ON – March 17, 2026 – A specter is haunting Canada’s justice system: fabricated legal precedents. A landmark study has revealed that since the beginning of 2024, Canadian courts and tribunals have identified at least 211 non-existent cases cited as valid law in legal submissions. The findings, part of a new report by legal tech company Courtready, span 111 separate judicial decisions and paint a startling picture of a systemic challenge fueled by the rapid adoption of artificial intelligence.

In a clear sign of the problem's origins, courts either found or presumed that generative AI was the culprit behind the fictional citations in 82 of those 111 decisions. This phenomenon of AI “hallucinations”—where models generate plausible but entirely false information—is no longer a theoretical risk but a documented reality that is actively clogging court dockets, undermining legal arguments, and threatening the very integrity of Canadian case law.

The 'Tip of the Iceberg': Uncovering a Hidden Problem

The issue is not only present but escalating at an alarming rate. According to the Courtready study, which analyzed decisions published on the Canadian Legal Information Institute (CanLII), the number of decisions flagging fictitious cases skyrocketed from just 7 in 2024 to 80 in 2025. In the first ten weeks of 2026 alone, another 24 decisions have been added to the tally.

The problem has become a national phenomenon, touching 42 different courts and tribunals across the country, from small claims courts to federal bodies. This indicates that no level of the justice system is immune to the infiltration of these digital ghosts.

Researchers warn that the 211 documented instances represent a conservative floor, not a ceiling. In nearly half of the decisions, judges noted the presence of fake cases but did not specify the exact number of faulty citations, meaning the true total is likely much higher. The data also exclusively captures what judges have caught and documented in written decisions.

“This is likely the tip of the iceberg. What we are seeing in these decisions is only what the courts or tribunals have caught,” said Tom Macintosh Zheng, a former litigator and co-founder of Courtready. “For every non-existent decision flagged by a court, there may be others that were never detected.”

This raises a chilling prospect: that an unknown number of AI-generated fabrications may have already slipped through the cracks, potentially influencing legal outcomes without the knowledge of judges or opposing counsel.

Justice Denied: The Vulnerability of Self-Represented Litigants

While the issue affects the entire legal ecosystem, the data reveals a deeply concerning trend: it disproportionately harms the most vulnerable. In 87 of the 111 decisions (78.4%), the flawed submissions came from self-represented litigants (SRLs).

These are individuals navigating the bewildering complexity of the legal system without a lawyer, often due to financial constraints. For them, free or low-cost generative AI tools can seem like a powerful equalizer, a way to conduct legal research and draft documents that would otherwise be out of reach. Instead, this reliance has become a trap.

Unaware of the technology's propensity to hallucinate, many SRLs have unknowingly built their arguments on a foundation of sand. The consequences are severe. In Quebec, one self-represented individual was fined $5,000 for submitting AI-generated “jurisprudence” containing eight fabricated citations. In other cases, the discovery of fake precedents has led to the complete dismissal of legal arguments, saddling individuals with adverse judgments and cost awards.

This reality transforms AI from a potential tool for access to justice into a significant barrier, creating a two-tiered system where those with the resources and training to verify information are protected, while those without are exposed to new and insidious risks.

A Patchwork of Rules: The Legal System Responds

The judiciary and legal regulators are not standing idly by, but their response so far is a developing patchwork of rules and directives across jurisdictions. The Federal Court of Canada now requires litigants to declare whether AI was used to generate content in their submissions. In Ontario, a new rule of civil procedure compels lawyers to certify the authenticity of every legal authority they cite, a direct measure to combat AI hallucinations.

Courts have also begun imposing sanctions on legal professionals who fail in their duty of verification. In the 2024 British Columbia case Zhang v. Chen, a lawyer was ordered to personally pay costs after their brief included two non-existent cases generated by ChatGPT. These rulings send a clear message: the professional obligation to ensure accuracy remains paramount, and blaming the technology is not a defense.

However, the approaches vary. While some courts, like the B.C. Supreme Court, have advised judges against using AI themselves, others are adopting “human in the loop” rules that permit AI-generated content as long as it is thoroughly vetted by a person. This fragmented response highlights the challenge of creating a unified standard for a technology that is evolving faster than regulatory frameworks can adapt.

Fighting Fire with Fire: Technology's Answer to AI's Flaws

Just as technology is the source of the problem, some are turning to it for the solution. In response to its own findings, Courtready has developed CaseCheck, a tool designed specifically for the Canadian legal landscape to help users verify their citations before they are filed.

The tool allows a user to upload a list of cases, which it then extracts and prepares for verification against a comprehensive database of Canadian case law. Crucially, its design philosophy emphasizes keeping a human in control. Rather than having one AI check another's work, CaseCheck facilitates a faster, more reliable manual verification process, ensuring a person makes the final judgment on whether a case is real.
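The extract-then-verify pattern described above can be sketched in a few lines. This is an illustrative assumption, not Courtready's actual code: the regex for Canadian neutral citations (e.g. "2024 BCSC 285") and the in-memory database lookup are hypothetical stand-ins, and a flagged citation is only surfaced for a human to review, never auto-judged.

```python
import re

# Canadian neutral citations follow the pattern: year, court code, number,
# e.g. "2024 BCSC 285". This regex is a simplified illustration.
NEUTRAL_CITATION = re.compile(r"\b(?:19|20)\d{2}\s+[A-Z]{2,8}\s+\d{1,5}\b")

def extract_citations(text: str) -> list[str]:
    """Return each neutral-citation string found in a submission."""
    return [m.group(0) for m in NEUTRAL_CITATION.finditer(text)]

def flag_unverified(text: str, known_citations: set[str]) -> list[str]:
    """Return citations absent from the reference database, for human review."""
    return [c for c in extract_citations(text) if c not in known_citations]

# Hypothetical example: one citation is in the database, one is not.
database = {"2024 BCSC 285"}
brief = "Relying on 2024 BCSC 285 and 2023 ONSC 9999, the applicant argues..."
print(flag_unverified(brief, database))  # → ['2023 ONSC 9999']
```

The key design point mirrors the article: the tool narrows the haystack, but a person still confirms whether each flagged case is real.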

To promote transparency, the company has also launched the 'Fictitious Citations in Canadian Courts' database, a free, bilingual, and publicly accessible tracker that is updated weekly. This initiative provides an ongoing resource for researchers, journalists, and legal professionals to monitor the scale of the problem.

The emergence of tools like CaseCheck underscores a critical lesson in the legal profession's encounter with AI: technology can be a powerful assistant, but it cannot replace the fundamental duties of diligence, verification, and professional judgment. As AI becomes more integrated into legal practice, the demand for robust, human-centric safeguards will only continue to grow.

