Legal Tech Fights Back Against AI’s Courtroom ‘Hallucinations’
- 1,000+ legal decisions worldwide involving AI-generated errors documented by researcher Damien Charlotin
- $2,500 sanction imposed on attorney Heather Hersh in Fletcher v. Experian for filing a brief with 16 fabricated quotations
- 73% of legal professionals cite incorrect or hallucinated outputs as their top concern with AI
Experts agree that while AI offers real efficiency gains in legal drafting, its tendency to produce “hallucinations” poses significant risks, making robust verification tools like RealityCheck essential to maintaining accuracy and professional integrity.
WASHINGTON, March 10, 2026 – As courts nationwide grapple with a surge of fabricated citations and misleading arguments generated by artificial intelligence, legal technology firm BriefCatch today launched RealityCheck, a new tool designed to help lawyers and judges verify legal authorities before they are filed. The move comes as the legal profession faces a crisis of confidence over the use of generative AI, with mounting cases of sanctions and professional reprimands for attorneys who submit flawed, AI-assisted work.
BriefCatch, a platform known for improving the clarity and persuasive power of legal writing, is expanding its focus to address the significant risks posed by AI "hallucinations"—the term for when AI models invent facts, quotes, or legal precedents. The new RealityCheck capability aims to serve as a critical safeguard, working directly within a lawyer's workflow to catch these errors before they reach a judge's desk.
"Once courts are running filed briefs through RealityCheck, the calculus changes for every litigator," said Ross Guberman, founder and CEO of BriefCatch, in a statement announcing the launch. "The question isn't whether to verify your citations. It's whether you want the court to find the errors before you do."
The Rising Tide of AI-Generated Errors
The launch of RealityCheck is a direct response to a rapidly escalating problem. While generative AI offers unprecedented efficiency for drafting documents, its tendency to produce plausible-sounding but entirely false information has created a minefield for practitioners. A database maintained by legal researcher Damien Charlotin has already documented over 1,000 legal decisions worldwide involving AI-generated errors, and the number of documented instances has jumped sharply in the past year.
These are not minor typos. The errors include citations to non-existent cases, fabricated quotations attributed to real judges, and legal arguments based on misstated holdings. The consequences for lawyers have been severe, ranging from monetary sanctions and public admonishments to case dismissals and attorney disqualifications.
The issue came to widespread attention in the 2023 Mata v. Avianca case, where a lawyer was sanctioned for submitting a brief containing multiple fictitious case citations created by an AI tool. Since then, courts have shown decreasing patience. Judges are now using existing powers, such as Federal Rule of Appellate Procedure 46(c) for "conduct unbecoming a member of the bar," to punish the submission of unverified AI output, signaling that the ethical duty to ensure accuracy remains squarely with the attorney, regardless of the tools used.
A Case Study in Sanctions: Fletcher v. Experian
The risks became starkly clear in a recent Fifth Circuit Court of Appeals decision. In Fletcher v. Experian Information Solutions, Inc., decided in February 2026, the court sanctioned appellate counsel Heather Hersh $2,500 for filing a brief "riddled with fabricated quotations and assertions."
In her opinion, Chief Judge Jennifer Walker Elrod found that counsel had used generative AI to draft a substantial portion of the brief and "failed to check the brief for accuracy." The court identified 16 fabricated quotations and five other serious misrepresentations of law and fact. The situation was exacerbated by the attorney's initial lack of candor, as she first blamed "publicly available versions of the cases" before admitting to using AI. The court made clear that a more forthright admission might have resulted in lesser sanctions, underscoring the importance of accountability.
In a powerful demonstration of its own claims, BriefCatch applied RealityCheck to the sanctioned Fletcher brief. According to the company, the tool not only identified every error cited by the Fifth Circuit but also flagged seven additional errors the court did not mention, including more fabricated quotations and a citation that resolved to an entirely different case.
"RealityCheck will help lawyers fix these errors before courts and opposing counsel find them," Guberman stated. "We want to improve the integrity of filings across the country."
A Two-Layered Defense Against Inaccuracy
To tackle the complex nature of AI hallucinations, RealityCheck employs a two-layer verification process. The first layer is a deterministic check that validates citations against authoritative legal databases. Powered by infrastructure from Counsel Stack, this system cross-references reporter volumes, court identifiers, and case names to detect phantom cases or misidentified authorities.
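A deterministic check of this kind can be illustrated with a short sketch. Everything below is hypothetical: the `KNOWN_CASES` table, the citation pattern, and the `verify_citation` function are illustrations of the general approach, not the actual (proprietary) BriefCatch or Counsel Stack code.

```python
import re

# Hypothetical mini-index standing in for an authoritative legal database.
# Keys: (volume, reporter, first page) -> canonical case name.
KNOWN_CASES = {
    (100, "F.4th", 1): "Doe v. Roe",
    (250, "F. Supp. 3d", 500): "Acme Corp. v. Widget Co.",
}

# Loose pattern for a "123 Reporter 456"-style citation.
CITE_RE = re.compile(r"(\d+)\s+([A-Za-z][\w.\s]*?)\s+(\d+)")

def verify_citation(case_name: str, cite: str) -> str:
    """Classify a citation as verified, phantom, or mismatched."""
    m = CITE_RE.fullmatch(cite.strip())
    if not m:
        return "unparseable"
    key = (int(m.group(1)), m.group(2).strip(), int(m.group(3)))
    canonical = KNOWN_CASES.get(key)
    if canonical is None:
        return "phantom"          # cite resolves to no known case
    if canonical != case_name:
        return "mismatched name"  # cite exists, but names a different case
    return "verified"
```

In this toy version, `verify_citation("Smith v. Jones", "999 F.4th 1")` returns `"phantom"`, the kind of non-existent authority that has drawn sanctions, while a real citation attached to the wrong case name returns `"mismatched name"`.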
The second, more sophisticated layer uses AI models to perform a deeper analysis. It evaluates whether quoted language actually appears within the cited opinion and assesses if the authority genuinely supports the legal proposition for which it is cited. This is designed to catch the more subtle and dangerous errors, such as misstated holdings or out-of-context quotes that mislead the reader.
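At its simplest, the quotation side of that second layer, confirming that quoted language actually appears in the cited opinion, amounts to normalized text matching. The `quote_appears_in` helper below is a hypothetical sketch of that baseline check; the harder semantic judgments, such as whether an authority actually supports a proposition, are what the AI-model layer is described as handling.

```python
import re

def normalize(text: str) -> str:
    """Lowercase, strip quote marks and brackets that commonly vary
    between a brief and the published opinion, collapse whitespace."""
    text = text.lower()
    text = re.sub(r"[\u201c\u201d\"'\u2019\u2018\[\]]", "", text)
    text = re.sub(r"\s+", " ", text)
    return text.strip()

def quote_appears_in(quote: str, opinion_text: str) -> bool:
    """True if the quoted passage appears verbatim, after
    normalization, somewhere in the opinion's text."""
    return normalize(quote) in normalize(opinion_text)
```

A fabricated quotation like the sixteen identified in *Fletcher* would fail this check outright; a subtly altered one might pass it and still misstate the holding, which is why a purely textual match is only the first line of defense.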
This launch marks a significant strategic evolution for BriefCatch. Following a recent Series A funding round and the acquisition of WordRake's editing technology, the company is moving beyond its core mission of writing improvement and into the critical new areas of citation verification and AI risk management.
Navigating a Cautious but Curious Market
BriefCatch enters a competitive and complex market. Major legal tech incumbents like LexisNexis and Thomson Reuters (Westlaw) have already integrated their own AI-powered verification features, aiming to keep users within their ecosystems by grounding AI outputs in their proprietary, trusted databases. A host of other startups, such as Beyond Assured and CiteCheck AI, are also offering specialized tools to combat hallucinations.
Despite the availability of these tools, significant barriers to adoption remain. A 2025 survey revealed that while nearly 70% of legal professionals now use generative AI for work, a deep-seated skepticism persists. Over 73% of legal professionals cite incorrect or hallucinated outputs as their top concern, followed closely by worries over data security and a general lack of trust in the technology's reliability.
However, the pressure to adopt is immense. Corporate clients increasingly expect their law firms to leverage technology for efficiency, and firms that strategically integrate AI are seen as having a competitive edge. This has left many legal teams moving past the debate over whether to adopt AI and focusing instead on how to do so responsibly. The demand is for tools that are not only powerful but also transparent, secure, and easily integrated into existing workflows—a niche that RealityCheck aims to fill for litigators and, increasingly, for the courts themselves.
