Report says use of AI could be violating human rights

The Canadian government’s use of artificial intelligence for decision-making in areas such as immigration needs independent oversight and carefully developed standards of use, says a new report from the University of Toronto.

'The way that the government seems to be conceptualizing it is that AI can be used to augment or replace human decision-makers,' says Petra Molnar, a researcher at the IHRP at U of T's Faculty of Law.

U of T’s International Human Rights Program (IHRP) at the Faculty of Law, together with the Citizen Lab at the Munk School of Global Affairs, has released “Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System.” The report says the federal government has been using algorithms and other AI technology since 2014, turning immigrants and refugee claimants into lab rats for the new technology.

“Without appropriate safeguards and oversight, the use of AI is very risky,” says Petra Molnar, a researcher with the IHRP. “The impact on lives is very real because we're dealing with a population that doesn't have access to the same types of rights that others have.

“This is the last group that should be subject to technological experiments of this nature,” says Molnar, whose training is in refugee and immigration law.

The report recommends independent oversight of all automated decision-making by the federal government, publication of all uses of AI for any purpose, and the establishment of a task force composed of “government, civil society and academia.”

The government has algorithmically automated certain duties formerly done by immigration officers, such as evaluating immigrant and visitor applications. It has indicated plans to expand this technology to tasks including evaluating the legitimacy of a marriage, determining whether an application is complete, whether a person should be protected as a refugee, and whether an applicant poses a security risk.

Molnar says Charter rights to freedom of expression, freedom of religion, freedom of mobility, privacy, equality and freedom from discrimination are at stake given the use of this technology.

AI already has a “problematic track record” on race and gender, exemplified in predictive policing, she says.

“As a society, we are really excited about the use of these technologies. But we need to remember that AI is not neutral. It's essentially like a recipe, right? It depends what you put in it,” Molnar says.

“And just like a human decision-maker, an algorithm will hold its own biases and replicate them throughout the system.”
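
To make the recipe analogy concrete, consider a minimal, hypothetical sketch in Python (invented for illustration only, and not drawn from the report or any government system): a "model" that simply learns approval rates from biased historical decisions will reproduce that same disparity in every prediction it makes.

# Hypothetical sketch: a "model" trained on biased historical
# decisions replicates that bias. All data below is invented.
from collections import defaultdict

# Invented historical decisions: (applicant_group, approved)
historical_decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# "Training": tally approvals and totals per group.
counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in historical_decisions:
    counts[group][0] += int(approved)
    counts[group][1] += 1

def predicted_approval_rate(group: str) -> float:
    approved, total = counts[group]
    return approved / total

# The learned "policy" mirrors whatever bias was in the inputs:
# group_a is approved 75% of the time, group_b only 25%.
for group in ("group_a", "group_b"):
    print(group, predicted_approval_rate(group))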

Cynthia Khoo, a digital rights lawyer and fellow at the Citizen Lab, points to a 2014 case in the U.K. in which 7,000 foreign students were wrongfully deported after an automated voice-analysis system proved unreliable.

Khoo says automated decision-making by the federal government raises serious questions about what happens with personal information.

“In addition, wherever you have people's personal data being used, whether fed into an algorithm for training or used as an input to get a recommendation about you, that raises a privacy rights question — what happens to your data, who is it given to and how are they using, protecting or sharing it, and why?” Khoo said via email.

The report draws on interviews with government analysts; government policies, records and public statements; and access-to-information requests. The researchers are still awaiting responses to 27 of those requests, filed in April of this year.

The “limited” information the researchers obtained from government is “part of the problem” and led to their recommendation for greater transparency, says Khoo.

The researchers also held a workshop with 30 experts at the Citizen Lab. Molnar says the report is different from the work the lab usually does because it is an attempt to canvass possible human rights violations before they occur.

“The question isn't are we going to be using AI but more, if we are using it, how are we going to ensure that it's done right in an accountable kind of framework of oversight, to make sure that the human rights ramifications are accounted for and make sure that we use these technologies critically and carefully,” says Molnar.

The report's authors have planned meetings with several federal bodies, including the Office of the Privacy Commissioner; Immigration, Refugees and Citizenship Canada; Innovation, Science and Economic Development Canada; and the Treasury Board Secretariat.

Ian Stedman, a lawyer and PhD candidate at Osgoode Hall Law School, led a conference in February called Bracing for Impact: The Artificial Intelligence Challenge, which examined the implications of AI technology for industry, intellectual property, cybersecurity, accountability and society at large. Automated decision-making in the public sector came up frequently at the conference. Stedman says the use of the technology in the immigration system is “shocking”; he had thought the government was further behind, still devising assessment tools to determine whether these practices “do more harm than good.”

“One of the calls to arms that came up in the conference was that we can't just go into this blind; there has to be a lot of work going into these topics before we really start deploying AI to make decisions about humans that, you know, impact people's personal lives,” he says.

Last year, the Canadian government announced a $125-million “artificial intelligence strategy,” which the finance department said would “position Canada as a world-leading destination for companies seeking to invest in AI and innovation.”

“It’s an interesting time because Canada is presenting itself and has been presenting itself, rightly so, as a leader in human rights, but it's also now presenting itself as a leader in AI,” Molnar says. “So it has this really unique and beautiful opportunity to be a leader in the responsible use of AI with human rights at the centre.”
