New privacy legislation will further regulate artificial intelligence

Includes requirements around the use of automated decision-making systems

New regulations proposed for artificial intelligence will affect how businesses develop, use and commercialize AI tools.

In Canada, the federal government has proposed new privacy legislation that would, among other things, include specific requirements around the use of automated decision-making systems. In the European Union, the European Commission has proposed the “first-ever legal framework on AI,” which would ban certain uses of the technology outright.

And in the United States, legislators are trying to develop a federal privacy law, similar to the EU’s General Data Protection Regulation, to address the country’s patchwork of state laws.

Reforming domestic privacy legislation

In November, the federal government tabled privacy legislation that would give individuals greater control over their personal information.

Bill C-11, the Digital Charter Implementation Act, would, if passed, enact two new statutes, the Consumer Privacy Protection Act and the Personal Information and Data Protection Tribunal Act, and amend several other acts.

Wendy Mee, co-chair of the privacy group at Blake, Cassels & Graydon LLP in Toronto, describes Bill C-11’s requirements concerning the use of automated decision-making systems as “not overly onerous” to comply with and primarily concerned with transparency. The point, she says, is “making sure that you are explaining when you are using automated decision-making systems to make predictions, recommendations or decisions, that people are aware of that and that they have the right to request an explanation of how their personal information was used in that process.”

Under Quebec’s Bill 64, which would amend several statutes in that province, companies must provide “more specific information at the time that the decision-making system is being used,” says Mee.

“The federal bill looks to be proposing disclosures about the use of AI in the company’s privacy policies, whereas the Quebec bill is proposing notice at or before the time of processing [information], which in my mind suggests even greater transparency because no one reads the privacy policy,” she says.

One challenge of Bill C-11, if implemented as proposed, will be compliance across jurisdictions, says Aaron Baer, a Toronto partner at Renno & Co., a Montreal-based law firm focused on startup and emerging-technology law.

“If I’m running any [kind] of platform that can be used by anyone in the world, I may be collecting data from people all over the place, and each of these places has different legislation,” he says, noting the various provincial regimes and patchwork of laws across the U.S. as well.

Bill C-11’s definition of an automated decision system is also expansive, Baer says. In essence, it includes “any technology that assists or replaces the judgment of human decision-makers.” The definition could capture something as simple as a digital questionnaire that asks a consumer a few questions and then issues a quick answer on whether they are eligible for a service or product, such as a mortgage.
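
To see how easily everyday software could fall under that definition, consider a minimal sketch in Python. The function name, fields and thresholds below are invented for illustration; the point is that even trivial rule-based logic, with no machine learning involved, replaces the judgment of a human decision-maker:

    # Hypothetical sketch of a digital eligibility questionnaire.
    # All field names and thresholds are invented for illustration;
    # even this trivial rule-based logic arguably meets Bill C-11's
    # broad definition of an "automated decision system," because it
    # replaces the judgment of a human decision-maker.

    def mortgage_prequalification(annual_income: float,
                                  credit_score: int,
                                  down_payment_pct: float) -> bool:
        """Return True if the applicant is automatically pre-qualified."""
        return (annual_income >= 60_000
                and credit_score >= 650
                and down_payment_pct >= 0.05)

    if __name__ == "__main__":
        eligible = mortgage_prequalification(75_000, 680, 0.10)
        # Under the proposed rules, the applicant would need to know an
        # automated system made this call and could request an explanation
        # of how their personal information was used in the process.
        print("Pre-qualified:", eligible)

Under Bill C-11 as drafted, a business deploying even this kind of tool would need to disclose its use and be prepared to explain how an individual’s personal information factored into the result.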

The bill is currently at second reading, and “it doesn’t look like it will fly through and get passed” as proposed, says Mee. The delay could also stem from the significantly higher fines the CPPA would impose.

Europe’s GDPR and the new AI framework

Under the GDPR, there are already rules around automated decision-making, says Mee. “It’s similar to what’s been proposed in Canada in terms of transparency, but it goes much further and gives individuals the right not to be subject to a decision based solely on automated processing if it has a significant impact on the individual.”

The GDPR “is more robust than what’s proposed in Canada,” she says, and includes more of the accountability requirements that the federal privacy commissioner has recommended.

On April 21, the European Commission published its Regulatory framework proposal on Artificial Intelligence, which aims to protect individual rights. The proposal would prohibit certain categories of AI tools outright. For example, it would ban “manipulative AI” used to manipulate human behaviour, as well as “social scoring,” which involves creating a score based on an individual’s behaviour that can affect their ability to access services.

Another category in the new framework is high-risk AI, “which is not outright prohibited but subject to heightened requirements to make sure that the appropriate checks and balances are in place,” Mee says.

The AI systems identified as high-risk include technology used in:

  • Critical infrastructures (e.g. transport) that could put the life and health of citizens at risk;
  • Educational or vocational training that may determine the access to education and professional course of someone’s life (e.g. scoring of exams);
  • Safety components of products (e.g. AI application in robot-assisted surgery);
  • Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
  • Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan);
  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
  • Migration, asylum and border control management (e.g. verification of the authenticity of travel documents);
  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

The high-risk category may include AI tools used in self-driving cars, for example, or biometric identification systems used in public spaces. Technologies in this category would undergo a strict assessment before being put on the market.

AI systems deemed to pose limited risk would be subject to specific transparency obligations. These would include chatbots used for customer service on a website, for example, where users would be made aware that they were interacting with a machine and could decide whether to continue.

Although the proposed rules apply to any AI system developed or used in the European Union, “if passed, it’s going to have a massive impact on businesses globally, because you’re going to have to design your AI systems to meet those requirements, even if you’re not located in Europe,” Mee says.

An American patchwork

In the past year, in one of the first popular votes on privacy regulation, California voters backed a ballot measure that created the first bespoke U.S. data protection agency and brought the state closer to the EU’s GDPR. Yet unlike Canada, which passed the Personal Information Protection and Electronic Documents Act (PIPEDA) in 2000, the United States has no federal privacy legislation applying to all businesses.

“Colleagues I chat with in the U.S. are skeptical” that the federal government will develop such legislation, Mee says. The GDPR was meant to provide a unified privacy regime in Europe, yet deviations among member states persist.

“It’s a laudable goal, and I think businesses would welcome any type of harmonization of the privacy legislation because it’s becoming so challenging to know everything about every place,” she says. In the meantime, “a lot of states are making their own laws to cover the gaps.”

Planning for change

Companies should keep privacy rights in mind when designing and bringing tools, algorithms, and products to market, says Baer. Businesses should also start to think now about changes they may have to make. That’s not just about updating a privacy policy, but about “privacy by design, and really understanding your business’s data flow. Where does the information go? Where did you get it from? Who has access to it?

“You’ve got to take a real look at that,” Baer says, “because what [the government is] going to look to at the end of the day is what’s happening in reality, not just what you said you were doing.” 

EU’s first legal framework on AI

  • address risks specifically created by AI applications
  • propose a list of high-risk applications
  • set requirements for AI systems for high-risk applications
  • define obligations for AI users and providers of high-risk applications
  • propose a conformity assessment before the AI system is put into service or on the market
  • propose enforcement after such an AI system is in the market
  • propose a governance structure at European and national levels

Source: European Commission Regulatory framework proposal on Artificial Intelligence

Seven guiding principles for meaningful consent

1. Emphasize key elements

2. Allow individuals to control the level of detail they get and when

3. Provide individuals with clear options to say “yes” or “no”

4. Be innovative and creative in consent processes

5. Consider the consumer’s perspective

6. Make consent a dynamic and ongoing process

7. Be accountable: Stand ready to demonstrate compliance

Source: Office of the Privacy Commissioner of Canada