When it comes to AI regulation, Canada can do better

In 2018, the Future of Life Institute — a think tank led by tech industry leaders such as the founders of DeepMind, Google executives and Elon Musk — pledged not to develop artificial intelligence weapons that could make decisions to take lives and called for regulation to prevent their development.

Christopher Alam

Most of us hope not to encounter autonomous lethal robots in our lifetime. But as AI pervades more benign fields, including autonomous vehicles, financial management and health sciences, regulatory bodies around the world continue to suggest and debate a range of approaches toward its governance.

Globally, countries are considering the guiding philosophies behind regulation.

Last year, U.S. Federal Reserve Governor Lael Brainard proposed that AI regulation in the financial services sector be balanced: it should mitigate risk without hampering innovation, and regulation of supervised institutions should not be so onerous that it drives experimentation into unsupervised areas. Almost simultaneously, the EU's AI ethics chief, Pekka Ala-Pietila, proposed that there be virtually no regulatory efforts for now, in order to allow innovation.

In the same year, the Israel Innovation Authority issued a paper entitled “The Race for Technological Leadership” in which it asked the question, “To protect the public, should there be a stringent threshold that requires the production of a retrospective account of the algorithm’s decision-making process, or would it be better to lower the bar of culpability in order to promote the adoption and development of AI-based innovation with all its benefits?”

Regulatory authorities are not aligned in their philosophical approach.

Canada, despite its interest in innovation in recent years, has engaged far less with these debates.

In the United States, conversely, AI regulation is a focus across a number of federal departments. An executive order issued by the Trump White House earlier this year has led to a remarkably robust road map for the U.S.’s AI strategy. This road map addresses ethical, legal and societal implications, aiming at AI that can be deployed safely and securely. It also dovetails with other legislative initiatives, such as the Open Government Data Act. In Canada, open government data remains a weak, non-legislated initiative of the Treasury Board. More recently, the Treasury Board also issued a 2019 Directive on Automated Decision-Making to begin guiding government departments on such usage.

In April, the U.S. Food and Drug Administration rolled out a paper requesting comment on how best to deal with regulating AI-driven medical devices in a way that complements standard pre-market approvals with a comprehensive total product life cycle monitoring process. The 2019 Canadian federal budget, on the other hand, only proposes — without much detail — a regulatory sandbox for the development of AI products.

A positive Canadian example is the federal government’s approach to the regulation of driverless cars. Last year, it rolled out development guidelines, which dovetail with provincial approaches to road testing. The result is a cohesive regulatory framework that will provide those wishing to engage in product development with a firm understanding of how to proceed in Canada.

Nonetheless, Canada should make AI regulation, and engagement with the philosophies that guide it, a greater priority. Risk mitigation concerns aside, without a regulatory framework that moves forward quickly, AI developers may look to jurisdictions outside Canada that inspire confidence through governance. And without contributing to the discussion on the philosophy of regulation, we may be dragged along by first movers.

Christopher Alam is a partner at Gowling WLG and head of the firm’s Toronto Real Estate and Financial Services department. He speaks and writes on AI regulatory matters at www.jurisai.ca. Follow him at @aijuris.
