'We need to have the competence to question:' LegalTech panel on genAI fakes in the legal system

Conversation took place Wednesday at Canadian Lawyer's 2024 LegalTech Summit

Generative AI can make a child’s dreams come true. One panellist at Canadian Lawyer’s recent LegalTech Summit told attendees about her exceptionally creative five-year-old daughter, who conjures up fantastical tales about faraway magical lands while lying on her bedroom floor in the evening. The panellist used a genAI tool that takes text prompts and produces images to compile a storybook representation of her daughter’s creations. 

For a five-year-old, this technological capability is thrilling. For judges, juries, and the justice system, it is alarming.  

The panel, “Best practices for addressing ethical and other concerns with Gen AI and deepfakes,” delved into how lawyers and the legal profession must respond to and adapt to a new reality in which real and fake are much harder to discern.  

“The biggest ethical issue that we're facing with this technology has to do with competence,” said Danielle Robitaille, managing partner at Henein Hutchison Robitaille LLP. “And by that, I mean the ability to spot and question material that comes in front of decision makers.” 

Part of Robitaille’s practice involves conducting workplace investigations and reviewing investigations “that have gone awry.” Investigators are increasingly being presented with documents whose authenticity one side disputes.   

In “he said, she said” situations, a piece of independent evidence corroborating one side’s version of events has typically been an effective way to resolve credibility and reliability issues, she said.

“The difficulty is we can’t rely on these documents in the way that we could before because they’re so easy to fake,” said Robitaille. One typical example: fake WhatsApp conversations created with ilovepdf.com.

“We need to have the competence to question, and you may not have the tech-savvy to understand whether something is fake or not, but you have the resources and the wherewithal to figure that out.” 

There are experts available who can analyze documents, examine metadata, and find extraneous information to determine their authenticity. As officers of the court, counsel must respond to the proliferation of genAI fakes by taking a hard look at the material opposing counsel presents and sometimes having “really uncomfortable conversations” with clients about their own documents, she said.
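
The kind of metadata examination Robitaille describes can start simply. As a minimal sketch, assuming Python with the Pillow library installed, an invented file name, and an illustrative (not authoritative) list of tool names, the following reads an image exhibit’s EXIF tags and flags common red flags, such as missing camera details or an editing tool recorded in the Software tag:

```python
from PIL import Image, ExifTags

def summarize_exif(path: str) -> None:
    """Print the EXIF fields a reviewer often checks first."""
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata at all; this can indicate stripping or regeneration.")
        return
    # Map numeric tag IDs to readable names (e.g., 305 -> "Software").
    tags = {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    for field in ("Make", "Model", "Software", "DateTime"):
        print(f"{field}: {tags.get(field, '<absent>')}")
    # Illustrative list only; a real examiner uses far richer signals.
    suspect_tools = ("photoshop", "gimp", "stable diffusion")
    software = str(tags.get("Software", "")).lower()
    if any(tool in software for tool in suspect_tools):
        print("Software tag names an editing or generation tool; worth a closer look.")

summarize_exif("exhibit.jpg")  # hypothetical file name
```

A check like this is only a starting point; a genuinely disputed exhibit still calls for a qualified forensic examiner.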

“It's about questioning things that you may not have questioned before, approaching the evidence with an extremely skeptical view, and looking with a curious frame of mind to spot potential markers of fraud.” 

In a recent interview, Maura Grossman, a research professor at the University of Waterloo’s School of Computer Science and an adjunct professor at Osgoode Hall Law School, said that if courts become inundated with genAI-produced evidence, every trial may require a forensic expert for authentication, driving up litigation costs. Lawyers will also increasingly bear the onus of confirming that the evidence their clients submit is the genuine article, which she said will be “very challenging.”

Grossman noted that organizations such as the Coalition for Content Provenance and Authenticity (C2PA) are developing a standard for labelling AI-generated material so that it can be readily identified. Eventually, standard markers could be attached to all AI-generated content, making it possible to determine when, where, and how the content originated.
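
To make the provenance idea concrete: in JPEG files, C2PA manifests are embedded as JUMBF boxes inside APP11 segments. The rough heuristic below, a sketch using only the Python standard library and a hypothetical file name, merely detects whether such a manifest-like structure is present; actually verifying a manifest’s signatures and hashes requires a full C2PA implementation such as the open-source c2patool.

```python
import struct

def has_jumbf_app11(path: str) -> bool:
    """Heuristic: does this JPEG carry a JUMBF box in an APP11 segment?"""
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":  # missing SOI marker, so not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:  # lost sync with the marker stream
            break
        marker = data[i + 1]
        if marker == 0xDA:  # start-of-scan: header segments are over
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"jumb" in segment:  # APP11 carrying a JUMBF box
            return True
        i += 2 + length
    return False

print(has_jumbf_app11("exhibit.jpg"))  # hypothetical file name
```

Absence of a manifest proves nothing, of course; the standard only helps once creation tools attach provenance data by default.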

Robitaille was joined on her panel by Veronica Mohan and Mirka Snyder Caron. Mohan is the AVP and senior counsel for global privacy and cybersecurity at Manulife. Snyder Caron is assistant vice president of privacy and compliance at Foresters Financial. The moderator was Marlon Hylton, senior counsel and CEO at INNOV-8 Data Counsel & INNOV-8 Legal Inc. In addition to the threat of deepfakes, the panel discussed the potential benefits of AI adoption. They also examined best practices for developing and enforcing policies, procedures, and standards for training on genAI, and how to effectively develop, vet, and evaluate AI-powered tools.

According to Hylton, just as the technology creating fake material is growing in sophistication, so are the technology and expertise countering it.

Until recently, AI-created images and videos bore distinct markers: a person might appear with one nostril much bigger than the other, an ear that was too small, or too many fingers. But we are arriving at a time, he said, when the naked eye cannot identify a fabricated visual depiction.

“There is software being developed on the detection side that is quite promising in helping lawyers and helping organizations, in general, to be able to detect when we receive fake documents, fake images, and fake videos. 

“Software is coming. So, there's hope.”