Charles Morgan at McCarthy Tétrault says companies are still 'trying to figure out what this is'
A federal government measure meant to guide how Canadian companies use and develop generative artificial intelligence (AI) systems may be well-intentioned, but it is likely to have very little legal or practical significance.
The Canadian Guardrails for Generative AI – Code of Practice urges companies to adopt practices and principles covering safety, fairness and equity, human oversight and monitoring, validity and robustness, and transparency and accountability. But adoption is strictly voluntary, and Charles Morgan, the national co-leader of McCarthy Tétrault LLP’s cyber/data group, points out that few companies have signed on.
“I think people are still just trying to figure out what this is and what the impact will be,” he says.
While there is no force of law compelling companies to adhere to the code, Morgan argues that it still carries persuasive power. He adds that companies should fully understand what it means to publicly adopt this or any similar code of best practices.
“There’s always a legal liability risk when there’s a disconnect between what you promise to the world and what you do,” he says. “If you hold out to the world that you’re going to act in a certain way, and you don’t, that’s always the problem, whether that’s in your privacy policy, whether that’s in any commitment that a company makes to consumers and the public. There’s liability if you misrepresent what you do.”
If a client asked Morgan about signing the code or publicly proclaiming its best practices, he would caution the company to look inward first.
“No client should make a commitment that they’re not able to abide by,” he says. “First, they have to assess their own maturity, their own practices, the extent to which they are able to stand up [to the commitment]. You can’t sign a document unless it actually reflects what you are doing and what you can do. The first step in any process along these lines… is to really dig deep into your own data-governance processes, your own risk tolerance, and your own strategic positioning within the market. What is it that you want to promise to your customers? And what are you able to promise to your customers?
“I don’t necessarily feel that our clients need to sign up to this voluntary code, but that they should be thinking about AI governance policies and procedures that address many of the same types of issues.”
Morgan describes the code as being built on recognized principles of good governance and the responsible deployment of AI. “The principles and standards are also very much designed to help companies … avoid the risk of using data of poor quality, treating their customers unfairly or treating their customers in a way that is non-transparent.”
Morgan also says the code does a good job of moving beyond merely restating high-level principles.
“What the code does that is valuable is actually taking things to the next level of granularity at two levels. One is by describing some of the specific measures that companies should consider implementing in order to meet each of the principles. And the second thing is to… distinguish between some roles that are different within the ecosystem so that there is that distinction between developers and managers, and there is that distinction between conduct and measures that need to be taken as regards to advanced generative systems in general, and those that are used for public use.”
Morgan is less than impressed with the way the code was developed and released. He explains that while the government introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act, 2022, AIDA, or a revised and updated version of it, isn't likely to be adopted until later in the year, and any effects of its adoption will take two or three years to be felt. In the meantime, Canadians, businesses and the government remain concerned about the impact of generative AI systems.
Given those circumstances, wanting to get a code out is understandable, but Morgan says the way the government went about it is less so.
“Isn’t it unfortunate that at the same time the government is actively promoting principles of transparency and governance and accountability, the whole process that has been adopted to produce the first iteration of AIDA, and now shortly, the second iteration of AIDA, and this guideline has been remarkably opaque. What a missed opportunity to have not engaged in broad public consultations to shape the content of this voluntary code. If you want people to buy in, then public consultation is absolutely the way to help ensure buy-in,” says Morgan.
“This is an area that just cries out for public consultation and consensus building and education. So many people are misinformed. So many people lack trust in the government about the way that AI is being used by both the public and private sector… And certainly, we have time. If this law is not going to come into force for two or three years, there’s plenty of time.”