It’s not facial recognition technology that’s the problem, says Jeffrey Lem, but unethical use of it
The Toronto Star and other mainstream media outlets seem to have piled on in the latest policing controversy, criticizing the RCMP and several Greater Toronto Area police forces for having tested an online facial-recognition technology produced by an American company, Clearview AI, and praising the privacy commissioners of British Columbia, Alberta and Quebec (along with their federal counterpart) for launching investigations into Clearview.
Yet there seems to be confusion in both legal and lay circles over just how the Clearview software works and why the technology may engage privacy concerns. After all, several facial recognition technologies are already in use worldwide across a wide variety of industries, with better technology and more innovative applications arriving every day. These technologies are all built on sophisticated deep-learning models that produce highly accurate facial recognition and that continue to improve over time.
While it is true that Clearview uses a similar sort of facial recognition algorithm, that is about as far as the similarities go. Unlike other facial recognition solutions, the Clearview algorithm compares video footage or stills against a database of more than three billion images “scraped” from internet sources such as Facebook and Twitter.
This practice has been condemned by privacy pundits as “invasive,” “unethical” and, by at least one commentator, “downright creepy.” Clearview clients feed an image of a subject (either a possible criminal or an unidentified victim) into the Clearview search engine, and the software then identifies any matches in its database of internet images.
In contrast, other facial recognition software systems do not rely on images scraped off the internet, relying instead only on an existing database of legally obtained images. In the context of policing, this is a database of convicted criminals, known suspects and victims: the so-called “mugshot book.” In other contexts, it is a database of photos freely provided by members. For instance, where facial recognition is used in seniors’ housing to help prevent residents living with dementia from wandering off, the database comprises photos provided by the residents and their families themselves, with informed and express consent in each case.
These ethical facial recognition systems essentially automate a process that has traditionally been labour-intensive and, therefore, slow. Unlike the Clearview model, these approaches to facial recognition are far less invasive because they do not depend on images surreptitiously scraped from social media sites.
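For readers curious about the mechanics, the core matching step is the same in every one of these systems. The Python sketch below is a minimal, illustrative example of that step: the `embed()` placeholder stands in for a trained deep-learning model (real systems use one; flattened pixels are not an adequate substitute), and the names, threshold and gallery structure are assumptions for illustration, not any vendor's actual design.

```python
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    # Placeholder: a real system would run a trained deep-learning model
    # here to map a face image to a fixed-length vector ("embedding").
    # Flattening pixels is NOT adequate; it just keeps the sketch runnable.
    vec = face_image.astype(np.float64).ravel()
    return vec / (np.linalg.norm(vec) + 1e-12)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Standard similarity measure between two embeddings.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray],
             threshold: float = 0.6) -> str | None:
    """Compare a probe face against a gallery of pre-computed embeddings.

    `gallery` maps each person's identifier to an embedding. Returns the
    best match scoring above `threshold`, or None if nobody matches.
    The 0.6 threshold is illustrative; real deployments tune it to their
    tolerance for false positives.
    """
    probe_vec = embed(probe)
    best_id, best_score = None, threshold
    for person_id, gallery_vec in gallery.items():
        score = cosine_similarity(probe_vec, gallery_vec)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id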
There is no Canadian case law dealing directly with facial recognition artificial intelligence, but some guidance on the legality of the technology might be found in the seminal Charter case of R. v. Fearon, 2014 SCC 77, [2014] 3 S.C.R. 621. In that case, the new technology in question wasn’t facial recognition but the cell phone, and the issue was whether police could use messages found on a phone seized incident to arrest. Justice Cromwell, writing for the majority, noted that a “balance” needs to be struck between the demands of effective law enforcement and the right to freedom from unreasonable search and seizure. In Fearon, the Supreme Court of Canada found that this balance could be achieved by permitting searches of cell phones seized incident to arrest, provided that such searches are thoroughly documented and subject to safeguards against “scope creep” and abuse.
It is almost certain that a similar balance will be struck in the context of facial recognition, with the law developing in a way that permits emerging ethical facial recognition technologies buttressed by procedural and substantive safeguards, while restricting or prohibiting the unregulated use of less-than-ethical applications of the technology, such as the internet-scraping Clearview software.
Although coverage of the Clearview controversy suggests that facial recognition has only recently been introduced to law enforcement in North America, police forces on both sides of the border have in fact been using ethical applications of facial recognition for some time as investigative tools (not as forensic evidence for arrest and prosecution), in each case with extensive safeguards in place to protect against abuse of the technology.
Ben Su, a co-founder of AIH, a Toronto company that specializes in facial recognition technology, notes that recent advances in deep neural networks and algorithm engineering have led to significant breakthroughs in the effectiveness of facial recognition. According to Su, the better modern algorithms deploy pre-processing modules and deep-learning models that eliminate almost all of the “false positive” and “data bias” problems that notoriously plagued early versions of the technology.
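Su’s point about pre-processing can be made concrete. One common pre-processing module is face alignment: rotating and cropping every image so that the eyes sit level and at fixed positions before the embedding model sees it, which removes pose variation that would otherwise inflate false positives. The sketch below, written against the OpenCV library, is one illustrative way to do this, not AIH’s method; the eye coordinates would come from an upstream face-landmark detector that is not shown, and the crop scaling is an assumed parameter.

```python
import cv2
import numpy as np

def align_face(image: np.ndarray,
               left_eye: tuple[float, float],
               right_eye: tuple[float, float],
               out_size: int = 112) -> np.ndarray:
    """Rotate and crop a face so the eyes are level and consistently placed.

    `left_eye` / `right_eye` are (x, y) pixel coordinates from an upstream
    landmark detector (not shown). Normalizing pose this way is one of the
    pre-processing steps credited with reducing false positives.
    """
    # Angle needed to bring the line between the eyes to horizontal.
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.degrees(np.arctan2(dy, dx))

    # Rotate the image about the midpoint between the eyes.
    center = ((left_eye[0] + right_eye[0]) / 2.0,
              (left_eye[1] + right_eye[1]) / 2.0)
    rotation = cv2.getRotationMatrix2D(center, angle, 1.0)
    rotated = cv2.warpAffine(image, rotation,
                             (image.shape[1], image.shape[0]))

    # Crop a square around the eye midpoint, scaled to the eye distance
    # (the 1.2 factor is an illustrative assumption), then resize to the
    # embedding model's expected input size.
    half = int(1.2 * np.hypot(dx, dy))
    x, y = int(center[0]), int(center[1])
    crop = rotated[max(0, y - half):y + half, max(0, x - half):x + half]
    return cv2.resize(crop, (out_size, out_size))
```

Feeding consistently aligned crops into the embedding model, rather than raw photos taken at arbitrary angles, is part of what makes the accuracy gains Su describes possible.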
Compounding Clearview’s problems, the company’s computer system at its New York headquarters was hacked in February; its client list, and possibly even its database of three billion social media images, may no longer be safe on its own servers. It was arguably bad enough that Clearview held this database of scraped facial images in the first place; the fact that its system was then hacked gives the privacy commissioners fodder indeed.
While Clearview may ultimately prove a temporary black eye for emerging technology industries, the ethical use of facial recognition technology is clearly here to stay, and its use, in both the public and private sectors, will expand exponentially starting -- well, yesterday.