Law Commission of Ontario releases second report in its project on automated decision-making in the justice system
Canada has a chance to proactively develop regulations to safely absorb the incoming wave of artificial intelligence and automated decision-making but needs to swiftly seize the opportunity to ensure rights are protected, says Nye Thomas, executive director of the Law Commission of Ontario.
The Law Commission of Ontario recently released the second report in its ongoing AI, ADM and the Justice System project. Regulating AI: Critical Issues and Choices analyses the AI and automated decision-making systems used by governments and public institutions, identifies regulatory gaps and proposes a framework to ensure governments protect human rights, uphold due process and encourage public participation.
“The train is coming. We know AI and algorithmic tools have the potential to bring extraordinary benefits, but also significant, proven risks,” says Thomas. “The task is to build regulation to take advantage of the benefits while minimizing the risk.”
The report found governments have begun to use AI in policing, bail, sentencing, welfare determination and to prioritize public services, among other areas. “Public sector interest in these systems is exploding,” says Thomas.
Whether they concern bail, welfare or access to healthcare, the decisions made by these systems affect people’s rights, he says.
“What our report does, is it really brings the issue to the attention of the public, hopefully, to stakeholders, and basically calls on governments and… all sorts of other public sector organizations to begin thinking about and tackling these issues before these issues arise,” says Thomas. “We're trying to be proactive. We're saying these systems have a track record. We know how to start regulating this stuff.”
Around the world, cautionary tales abound of the careless implementation of these systems, he says.
“We can do this more thoughtfully and more proactively here. And I think we should take the opportunity to do so.”
The federal government, through its Directive on Automated Decision-Making, is the only government in Canada that has attempted to regulate these systems, the report found. While the directive is a good start, its scope is limited, says Thomas.
Aside from the federal government, no provinces, municipalities, government agencies, tribunals or other public sector organizations are equipped to directly regulate these systems and address issues around bias, opacity, due process and remedies, he says.
The report suggests a number of initiatives by which the public sector could begin regulating AI and automated decision-making systems.
First, provinces should adopt legislation modelled on the federal directive. Governments should also develop an AI register: a public website listing all the algorithmic tools they use.
Just as governments regularly conduct privacy impact assessments, they should commit to AI impact assessments as well. The assessments should be a “comprehensive disclosure of what the system is for, what its human rights impact might be, what data it uses, what remedies are available and how it is going to be evaluated,” says Thomas.
The report recommends governments commit to due process and procedural fairness protections, so affected people can challenge decisions rendered by these systems. And to make the systems transparent and legally accountable, there must also be “broad disclosure” of the data used, he says.