Why banning AI in court is the wrong fix for fake case citations

Accountability – not prohibition – is the proper approach in the age of AI-generated errors

With hallucinated case law appearing in Canadian courtrooms, some judges and legal professionals are calling for stricter controls on artificial intelligence – ranging from disclosure mandates to outright bans on AI use in legal submissions.

But banning AI or stigmatizing its use misses the point.

As Zhang v. Chen, 2024 BCSC 285 – Canada’s first reported decision involving fake case citations generated by ChatGPT – demonstrates, the core issue isn’t the technology. It’s professional responsibility. Lawyer Fraser MacLean, who successfully challenged the false citations, told me: “The problem isn’t that AI was used. It’s that lawyers submitted citations to the court that were completely fabricated.”

In R. v. Chand, 2025 ONCJ 282, a similar situation led the Ontario Court of Justice to prohibit the use of generative AI for legal research in that matter after numerous erroneous citations appeared in defence submissions. These are cautionary tales – but the appropriate response is not prohibition. It’s verification.

The case for regulation over bans

Legal professionals remain responsible for the accuracy of their work, regardless of the tools they use. Whether the source is a junior associate, a legal database, or a generative AI model, submitting inaccurate or false citations breaches the same ethical obligations.

Legal ethicist Professor Amy Salyzyn, in her chapter “AI and Legal Ethics” in the book Artificial Intelligence and the Law in Canada (LexisNexis, 2021), argues that rather than banning AI, regulators should adapt existing rules. She proposes:

  • Requiring lawyers to understand and use “relevant” technologies that are “reasonably available”;
  • Making “reasonable efforts” to prevent unauthorized disclosure of client data;
  • Taking “reasonable steps” to ensure any legal tech used aligns with a lawyer’s ethical duties.

MacLean agrees – and adds that when AI-generated content is filed in court, lawyers should identify what was created with AI, for what purpose, and confirm that it was personally reviewed and verified for accuracy.

Ethical tools and smart policies

MacLean has experimented with legal-specific AI tools like Alexi, which rely on closed, verified databases. Unlike general tools such as ChatGPT, Alexi links directly to real cases and is designed to avoid hallucinations. But even then, MacLean emphasizes the need for human oversight: “The summaries can sometimes oversell the point. You always have to check the case.”

He argues that banning AI in law firms would be counterproductive. Lawyers would likely continue using it covertly – on personal devices or outside firm systems – introducing new cybersecurity and oversight risks.

Instead, MacLean recommends firms adopt internal AI-use protocols, including:

  • Requiring a printout of any cited case’s first page;
  • Providing AI training to detect hallucinated content;
  • Reviewing all AI-assisted outputs (including marketing materials);
  • Encouraging transparency and use within secure firm infrastructure.

What courts – and firms – should really do

So, what should courts be doing? MacLean believes disclosure requirements may be appropriate in limited circumstances – such as when AI-generated summaries are submitted in formal pleadings – but only when paired with a lawyer’s certification of accuracy.

Blanket bans or disclosure mandates for every instance of AI use – even for routine tools like spellcheck or grammar correction – are unnecessary. The focus should remain on results and accountability, not the underlying mechanism.

In Zhang, MacLean’s court submissions outlined the broader risks of unchecked AI use: flawed judicial reasoning, wasted resources, and reputational harm to the profession. But as he put it, the solution isn’t to “ban the plane” just because early flights had crashes.

Toward a trusted AI legal future

General-purpose AI tools like ChatGPT are not suited for legal research. However, legal-specific platforms trained on reliable, permissioned data can increase efficiency and consistency across the legal system. As Canadian Lawyer has reported, AI adoption in law is accelerating – particularly among in-house teams.

Rather than suppressing that momentum, courts and regulators should support responsible innovation. That means setting clear rules, enforcing verification protocols, and maintaining ethical safeguards – not issuing reactionary bans.

The real risk isn’t the technology – it’s how we use it. The solution isn’t to ban AI – it’s to build the professional structures to use it wisely.