The rise of AI in every industry propels the need for AI regulation in Canada. Learn more about the steps the government has taken and how they are faring so far
More people are becoming hooked on AI. It’s not just commercial businesses that use it regularly, but also everyday people who rely on it in their daily lives, whether for work or just for fun. From workplace integrations to making a still picture of your dog move like a video, AI is everywhere, and its growth has been unprecedented.
But how does AI regulation in Canada keep up with this rapid technological development? At what stage are our laws on AI right now?
Artificial intelligence (AI) regulation refers to the legal promotion or restriction (or both) of AI’s use across all sectors of society and of its development by private companies. Part of this regulation concerns the ethics of AI’s use, both by individuals and by industries. Collectively, AI regulation refers to:
Here’s a video that explains the impact of AI on the Canadian economy, and how AI regulation may affect it:
AI has already been making the rounds in the legal profession. Check out our Special Report on the Top Legal Tech, Service Providers, and Products in Canada to learn which products lawyers trust most today.
As one of the global leaders in AI development and regulation, Canada will be closely watched for its position on AI issues. Aside from the pending Bill C-27, the federal government has released a Voluntary Code.
Specific sectors and industries have also adopted their own guidelines on the use of AI. For example, some provincial law societies have released guidelines for lawyers using generative AI.
The most prominent proposed law for AI regulation is Bill C-27, the Digital Charter Implementation Act, 2022. Introduced in 2022, it was sponsored by the Minister of Innovation, Science and Economic Development Canada (ISED). As of this writing, it is still pending before the Standing Committee on Industry and Technology (INDU) in the House of Commons.
When passed, Bill C-27 will create three new laws, namely:
The CPPA and the Tribunal Act were previously the subject of Bill C-11, whose content is now incorporated into Bill C-27. Bill C-11 was introduced in 2020 but had not been passed by the time Parliament was dissolved in 2021.
Bill C-27 will also amend the Personal Information Protection and Electronic Documents Act (PIPEDA). It will repeal PIPEDA’s Part 1 and change its short title to the “Electronic Documents Act.”
Consumer Privacy Protection Act (CPPA)
The CPPA aims to protect every person’s personal information while regulating commercial activities that collect, use, or disclose that information. Some may find that the CPPA works much like PIPEDA; in effect, it takes over PIPEDA’s regulation of personal information. This is why, if Bill C-27 is passed, PIPEDA will simply remain as the Electronic Documents Act.
Briefly, the CPPA will:
Simply put, the Tribunal Act establishes the Personal Information and Data Protection Tribunal, which shall have the following jurisdiction:
The forefront of Canada’s AI regulation is the third part of Bill C-27, the AI Act or AIDA. When enacted, it will be administered and enforced by the Minister of ISED and the Artificial Intelligence and Data Commissioner. The main purposes of the AI Act are to:
regulate the trade and commerce in AI systems
establish requirements for AI systems’ design, development, and use
prohibit certain behaviours in relation to AI systems that may harm individuals
Here are some of the important highlights of the AI Act:
As part of AI regulations, there are specific requirements and obligations that the AI Act imposes upon developers of AI systems. For greater clarity, the law defines the specific “regulated activities” where these requirements apply. These are:
When an entity is engaged in any of the regulated activities, the following are their obligations under this AI regulation:
if they process or make available for use anonymized data: they must establish measures governing how the data is anonymized and how it is used or managed
if they’re responsible for an AI system: they must identify if it meets the criteria for a high-impact system under the AI regulations
if an AI system is identified as high impact, they must:
establish measures to identify, assess, and mitigate the risks of harm or biased output arising from its use
if they do any of these activities, they must:
if they make available for use or manage a high-impact AI system: they must publish on a website a description of the AI system and its mitigation measures
The AI Act lays down the following offences when it comes to the use and development of AI systems:
In the context of AI systems, the harm may be a physical or psychological impact on a person. It can also mean substantial damage to a person’s property.
Offences under this AI regulation may be prosecuted either on indictment or by summary conviction. Penalties include a fine and/or imprisonment, and they can be imposed whether the offender is an individual or a corporation.
Canadian Artificial Intelligence Safety Institute (CAISI)
Another measure of AI regulation is the Canadian Artificial Intelligence Safety Institute (CAISI), which the federal government recently launched. The CAISI aims “to bolster Canada’s capacity to address AI safety risks” and to make Canada “a leader in the safe and responsible development and adoption of AI technologies.” This is only one part of the federal government’s AI regulation involving ISED and other bodies.
Learn more about CAISI with this video:
For other Special Reports that rank lawyers and law firms across the country by geography and practice area, head over to our page on Special Reports and Rankings.
Just like the federal AI regulation, most provincial AI laws are still pending before their lawmaking bodies. Here are some of the provincial bills to look out for:
Socio-economic considerations and preventing the creation of a dangerous superintelligence are just some of the concerns when it comes to enacting AI regulations. The political implications of lawmaking also affect how AI regulations are passed, not just in Canada, but in every jurisdiction.
The slow development of AI laws and regulations can be an advantage for developers, but a disadvantage for those who feel aggrieved by AI. While it depends on your perspective, lagging AI regulation is, on balance, a loss for all of us.
Without proper guidelines, the efforts of companies and developers pursuing AI may be halted, or even barred, by future laws that, when passed, might not have kept pace. On the other hand, the public remains unrestricted, and its activities may not always be for the good.
This slow-burning process of passing AI laws and regulations can be attributed to several factors, including:
Aside from the lagging process of enacting AI regulations, their content is another contentious issue between lawmakers and the companies that develop these AI systems. Both sides agree that development must proceed cautiously and that human rights must be protected along the way. However, the extent to which AI development and use should be regulated is still up for debate.
In this booming and inescapable age of AI, finding the right balance will take time and careful consideration, especially in passing laws and regulations on AI systems. In Canada, efforts are already underway; however, we have yet to see how these laws apply to actual cases and how courts adjudicate them. In the meantime, we’ll have to rely on current laws that may apply to the matter, and of course, on the innate human judgment that distinguishes us from AI in the first place.
For more resources about AI regulation and other related topics, bookmark our page on Legal education.