The organization C2PA is developing standards for marking AI-generated material
While much of the legal profession has focused on generative AI's (GenAI) potential to hallucinate fictitious caselaw, Maura Grossman says she is more preoccupied with the fact that we are entering a world in which it will be increasingly difficult to distinguish between genuine and AI-generated evidence.
The risk of deepfake evidence stems from the rising quality of the text, photos, video, and audio produced by free, open-source GenAI tools.
“That's going to be a really significant challenge for the court system,” she says.
Grossman is a research professor at the University of Waterloo’s School of Computer Science and an adjunct professor at Osgoode Hall Law School. She is also an affiliate faculty member of the Vector Institute and an eDiscovery attorney and consultant in Buffalo, New York.
Grossman began her career working at a law firm in New York before developing an interest in digital evidence – how to find the needle-in-the-haystack document in a hard drive full of them. That led her into the fields of machine learning and AI, and she left her legal practice to collaborate with a professor at Waterloo’s School of Computer Science, where she researched how machine-learning technologies could learn to find evidence better than lawyers.
Through her work, her interest in responsible and trustworthy AI grew. She now teaches computer scientists at Waterloo about the legal, ethical, and policy considerations relevant to their work. At Osgoode, she teaches law students about technology and its implications for legal rights, discrimination, and surveillance.
If courts become inundated with GenAI-produced evidence, every trial may require a forensic expert to distinguish the real from the fake, she says, which will drive up litigation costs.
Typically, when a client gives their lawyer a piece of evidence – an audio recording of the client’s spouse verbally abusing them for a family law case, for example – there is no requirement for the lawyer to authenticate it.
“But I'm starting to be concerned that as we move forward, there's going to be more of an onus on lawyers to confirm that evidence given by their own client is likely to be correct, accurate, or true,” says Grossman. “That's very challenging.”
Grossman expects that AI-generated content will eventually carry markers identifying it as such and showing when and where it originated.
“But it's going to take some years before we have a set of standards for every device to do this,” she says. In the meantime, it is possible to remove watermarks from AI-generated photos, for example, or to add them to genuine ones.
Grossman says organizations such as the Coalition for Content Provenance and Authenticity (C2PA) are developing a standard for AI-generated material that would allow for its simple identification.
“Eventually, we'll get there,” she says.
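To make the provenance idea concrete, here is a minimal Python sketch of what checking for such a marker can look like. It assumes a JPEG file and only detects whether a C2PA manifest appears to be embedded (C2PA stores its manifest in JPEG APP11 segments whose payload carries the label "c2pa"); it does not verify signatures or detect tampering, which requires a full C2PA implementation, and the file path is a placeholder.

```python
# Minimal sketch: detect whether a JPEG appears to carry a C2PA manifest.
# C2PA embeds its provenance manifest in JPEG APP11 (JUMBF) segments whose
# payload includes the label "c2pa". This check only looks for that label;
# it does NOT validate signatures, hashes, or tampering.
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":  # every JPEG starts with the SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:      # lost sync with the segment markers
            break
        marker = data[i + 1]
        if marker in (0xD8, 0xD9) or 0xD0 <= marker <= 0xD7:
            i += 2               # standalone markers carry no length field
            continue
        (length,) = struct.unpack(">H", data[i + 2 : i + 4])
        payload = data[i + 4 : i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 JUMBF segment
            return True
        if marker == 0xDA:       # start-of-scan: metadata headers are over
            break
        i += 2 + length
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))  # e.g. python check_c2pa.py photo.jpg
```

A real authenticity check would go further, parsing the JUMBF boxes and validating the manifest's cryptographic signatures with a C2PA library. The point of the sketch is simply that provenance markers are ordinary, machine-readable metadata, and, as Grossman notes, their absence proves nothing so long as watermarks can be stripped from AI-generated images or added to genuine ones.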
When it comes to AI-powered legal tech, Grossman says it is essential for lawyers and law students to become familiar with the tools because they will be pervasive, both as a source of evidence and in many other facets of practice.
“It behooves lawyers to get educated in this area and to make some decisions about what fruitfully can be used in practice and what is not ready for primetime yet. That includes learning how to vet the tools and how to evaluate them.”
“It's a buyer beware sort of atmosphere right now,” says Grossman. “But it's also very exciting. There’s going to be big changes, I think, in the way that law is practiced.”