Benjamin Perrin on how the legal academy is adapting to the challenges of generative AI

Perrin spoke on the CL Talk podcast about teaching and researching in the age of AI

Benjamin Perrin has always been critical of the justice system. Last year, we spoke to him about his book, which put the criminal justice system on trial and argued for a shift from a tough-on-crime to a trauma-informed approach.

More recently, he has shifted his focus to generative AI, examining its good and bad applications in the classroom and beyond.

He spoke on the CL Talk podcast about the implications of this technology for his students, fellow law professors, and the justice system in Canada and worldwide.

Below is a summary of the conversation:

When ChatGPT emerged two years ago, it set off a wave of concern about plagiarism in education circles, says Benjamin Perrin, a law professor at the University of British Columbia. That concern has prompted a re-evaluation of teaching methods and policies as students and educators navigate the rapid adoption of generative AI.

Perrin says that students now use AI tools to generate ideas, draft papers, and answer questions instead of consulting professors or peers. This widespread use has required universities to begin crafting rules on professional responsibility for AI use, though Perrin says that “the technology has moved far faster than any of the guidelines or rules.”

In his classes, Perrin adapts by emphasizing oral defences and other assignments that reduce reliance on AI-generated content. “The days of just having a student go off and write a paper and come back and submit it without any kind of concern about potential AI use are gone,” he says. His assessment methods aim to help students understand that “you’re actually not learning when you’re just cutting and pasting from an AI tool.”

Beyond education, Perrin’s teaching and research have increasingly explored AI’s implications for the justice system, particularly the reliability of legal AI tools. He says there are concerns that current legal research tools incorporating AI fail to meet minimum standards of accuracy and could mislead users. To address this, he supports the development of industry-wide benchmarking to ensure tools meet established quality criteria. “The idea that a legal tool would be at all helpful for anyone … means that it could, at least … produce something that … a first-year law student could generate at the end of their first year,” he says.

AI’s role in policing is another focus of Perrin’s research. On the positive side, he cites examples such as AI tools used to process large volumes of child exploitation images, reducing officers’ exposure to traumatic material. “Rather than having an individual officer have to look at each of those horrific images and traumatizing them in the process, the AI tools are able to … dramatically limit the amount of time spent by a person” looking at these disturbing images, he notes. However, he also highlights risks, including racial biases in facial recognition systems and privacy violations from tools that collect data without consent. “These are some of the positives and also dark sides of AI in policing,” he says.

Internationally, Perrin has examined the use of AI in military applications, including autonomous weapon systems capable of making targeting decisions without human input. He points to ongoing efforts by the United Nations and the International Committee of the Red Cross to establish a ban on such systems. “The starting point for our research has been to document some of those weapon systems … to show how they operate and to identify areas where they have been used,” he says. He emphasizes the importance of addressing proportionality, the principle that expected civilian harm must be weighed against the anticipated military advantage, in conflict zones where human lawyers embedded within military units currently make those assessments.

Perrin also discusses the broader need for accountability when using AI tools in legal practice or judicial decision-making. He compares the use of AI to relying on research assistants or law clerks, stressing the importance of verifying and understanding the outputs. “Human oversight is the number one antidote to these concerns,” he says. While AI can save time and improve efficiency, Perrin emphasizes that its outputs must always be critically assessed to ensure accuracy.

The episode can also be found on our CL Talk podcast homepage, which includes links to follow CL Talk on all the major podcast providers.