Her legal team advises on partnerships, agreements, and the social impacts of AI
As artificial intelligence continues to redefine the zeitgeist, legal professionals are keeping a close eye on developments and what they could mean for internal practices. For Justine Gauthier, managing regulations and expectations around AI is a way of life – and something that she takes incredibly seriously.
As general counsel and head of AI governance at Mila, the Montreal-based artificial intelligence research institute, she tells Lexpert that new technology offers both tremendous opportunities and significant challenges.
"We work as a team on the legal and corporate affairs side, including myself," Gauthier explains. "We are four lawyers, a coordinator, and on the AI governance side, I have an AI governance lead, and we also work closely with responsible AI experts at Mila as well."
The scope of their work spans advising tech teams, business units, and executives on various legal issues, many related to the research and development of AI technology. This involves drafting and negotiating technology agreements and partnerships, as well as handling technology transfers that arise from researchers' academic affiliations.
"A lot of our researchers are affiliated with universities," Gauthier adds, noting partnerships primarily with the University of Montreal and McGill.
AI governance at Mila, although only recently formalized, has always been intrinsic to its operations.
"AI governance was always a part of our DNA," Gauthier emphasizes. "We work on both our internal governance of AI at Mila... and we are very keen to help outside organizations, both our partners and other organizations, private and public sector."
As AI evolves, the legal frameworks surrounding this technology must also evolve. Gauthier recently worked on a paper proposing a new modifiable-type open-source license alongside co-authors Paul Gagnon and Misha Benjamin.
"The Modifiable-type (M-type)…. is something that the co-authors of this paper, including myself, came up with... the idea behind this M-type license is that historically, open-source contributions have been deemed irrevocable. Once you publish something in an open-source manner, it's out there forever."
This license aims to adapt to the rapid changes in the regulatory landscape by allowing contributors to modify or revoke their licenses in good faith if unforeseen issues, like biases in AI models, arise.
"This is really a new mechanism we're proposing to the open-source community to give back a little bit more power to the contributors," Gauthier adds.
AI governance frameworks were a major topic at the recent Canadian Privacy Symposium (CPS) 24, where Gauthier participated in a panel discussion.
"The first thing to keep in mind is that there's no one framework that fits all," she says. The context of the organization, whether it's a technology company or not, greatly influences the framework's design. It's essential to understand the specific AI use cases, whether developing AI products internally or procuring AI systems from third-party vendors.
One critical component of an effective AI governance framework is the multidisciplinary involvement of various teams within an organization.
"It can't just be the lawyers looking at this. That's not going to work. It also can't just be the tech teams and the developers looking at this on their end," Gauthier says. "Everyone kind of needs to work together to cover the different angles."
At Mila, addressing the ethical and social implications of AI is paramount.
"This work at Mila and at a lot of other places has been happening now for a long time," Gauthier says. "For Mila, it's never been a nice-to-have or just a reputational issue. It's always been at the core of our DNA.”
Mila's commitment to ethical AI is evident in how research on the social impacts of AI is integrated into its work. Its researchers include professors Catherine Régis and Golnoosh Farnadi, who focus on the ethical dimensions of AI. Internally, Mila's AI projects follow a specific governance framework, with responsible AI experts guiding teams through the process.
"Conducting impact assessments is a big part of that," Gauthier explains.