Canadian courts turning an eye to how artificial intelligence is used in the legal system

Yukon Supreme Court and Manitoba Court of King’s Bench have put out practice directives on AI use


Canadian courts are weighing in on using artificial intelligence for legal submissions, with at least two chief justices issuing practice directives telling counsel they must advise the court of the tool used and for what purpose.

“There is no regulation at the moment on the use of ChatGPT or any kind of new AI tool,” Yukon Supreme Court Chief Justice Suzanne Duncan said in an interview. “It’s in its very early days but rapidly evolving. And we want to make sure that there is awareness for everyone involved in a case before it goes to court that, if AI is being used, it is transparent and fair.”

Chief Justice Duncan recently issued a directive that says: “Artificial intelligence is rapidly developing. Cases in other jurisdictions have arisen where it has been used for legal research or submissions in court. There are legitimate concerns about the reliability and accuracy of the information generated from the use of artificial intelligence.

“As a result, if any counsel or party relies on artificial intelligence (such as ChatGPT or any other artificial intelligence platform) for their legal research or submissions in any matter and in any form before the Court, they must advise the Court of the tool used and for what purpose.”

ChatGPT is a generative AI program that produces content in response to user prompts. While the chatbot can write coherent, human-like responses drawing on the massive body of text it was trained on, the content is not always reliable.

The Yukon directive is similar to one issued on June 23 by Chief Justice Glenn Joyal of Manitoba’s Court of King’s Bench, which requires Manitoba lawyers and self-represented litigants to disclose the use of artificial intelligence in submissions prepared for the court. This includes identifying which AI tools were used and how they were applied.

Chief Justice Duncan said a recent incident in Manhattan federal court, where two lawyers blamed ChatGPT for including fictitious case law in a court filing, highlights the potential pitfalls of the technology.

The two New York City lawyers used the tool to find legal precedents that supported their client’s case. Instead, ChatGPT invented non-existent legal cases and opinions that appeared in the lawyers’ filings. The two lawyers and their firm were fined US$5,000.

“It’s something that I’ve been following with great interest, especially with the publicity that ChatGPT has been getting,” Chief Justice Duncan said.

Manitoba’s Chief Justice Joyal also cited as reasons for his directive the “legitimate concerns” about the reliability and accuracy of the information generated by AI programs and the need for ongoing discussions about its responsible use in court cases.

"While it is impossible at this time to completely and accurately predict how artificial intelligence may develop or how to exactly define the responsible use of artificial intelligence in court cases, there are legitimate concerns about the reliability and accuracy of the information generated from the use of artificial intelligence," the order states.

“It’s a modest first step, which is using a tone that’s both cautionary and anticipatory,” Chief Justice Joyal said in an interview with the Winnipeg Free Press. “We know that AI is around now, and it’s probably going to be used increasingly in courts like it will [be] elsewhere.

“We don’t know how it’s going to be used, and we don’t know how it’s going to evolve, but we have to be cautious with respect to it.”

Chief Justice Duncan said that, at this point, the Yukon directive is written broadly to help the court gain insight into whether, and how, AI is being used in the court system. “It is not to discourage the use of AI in legitimate ways,” she said. “We know that it can create great efficiencies in terms of time and cost.”

She added: “It’s just to make sure that we know it’s being used and for what purpose . . . so that if there are any issues, they can be dealt with.”

The judge also said that the practice directive targets legal research and legal submissions, not AI tools designed for other purposes, such as writing and grammar. “For example, Grammarly uses AI, and it’s not the intent [of the directive] to have tools like that be discouraged or prevented from being used.”

Using digital tools for legal research has become essential, she said. “Legal platforms or databases like Westlaw, Carswell or CanLII have revolutionized legal research. But we know those databases. We know their parameters, their limitations, and their reliability. But here [with AI], we’re not.”

How self-represented litigants use AI tools like ChatGPT is also an important issue, since they typically lack legal training.

Chief Justice Duncan said there is an increasing number of self-represented litigants, which “throws a whole other light and potential complexity into the mix.” They may not be able to rely on the long-standing databases used by lawyers and may instead turn to ChatGPT or other AI tools, which could steer those representing themselves in the wrong direction.

However, Chief Justice Duncan said that it is “going to be fascinating” to see how AI progresses and how it could help bring access to justice to more people by lowering costs and giving self-represented litigants more tools. The broad nature of the practice directive acknowledges that potential, she said.

“The practice directive is a quick way to get information out there to the profession, but it’s also a very flexible tool we can amend simply and quickly. The intention is to have a tool that, from the court’s perspective, [is] easy to revise as we learn more.”