This is the first HRIA founded specifically on Canadian human rights law
The Law Commission of Ontario (LCO) has teamed up with the Ontario Human Rights Commission (OHRC) on the development of the first Human Rights AI Impact Assessment (HRIA) tool founded specifically on Canadian human rights law.
The tool, which was unveiled on Tuesday, is intended to help organizations evaluate whether AI systems comply with human rights law, according to an LCO press release. It provides a structured framework that helps AI developers, managers, and owners spot potential discrimination risks and drive human rights compliance throughout an AI system's lifecycle. The tool was developed for use by governments, public agencies, and private sector companies.
The LCO confirmed that the HRIA is aligned with internationally recognized AI principles and outlined several goals the tool is intended to achieve.
The development of this Canada-specific HRIA is also in line with the promotion of "Trustworthy AI" frameworks meant to bolster public confidence in the benefits, lawfulness, and accountability of AI systems, the LCO said. Most existing AI impact assessments are founded on "ethical AI" norms or on American or European law, and they tend to focus on regulatory compliance concerns such as privacy and data security.
LCO counsel Susie Lindsay has been appointed to head the HRIA project for the LCO. She brings 12 years of litigation experience, along with a background in regulatory law from a previous stint as regulatory counsel at Bell Canada and in administrative law from her time as an associate at Rogers Partners LLP.
The Canadian Human Rights Commission also contributed to the HRIA project. The LCO confirmed that an LCO HRIA background paper will be published.