Remedies include claims for appropriation of personality, harassment, copyright infringement, defamation, and invasion of privacy
As “deep fakes” become a regular feature of disinformation in our society, attendees at a recent Ontario Bar Association webinar heard that there are legal tools, though with limitations, that can be used to combat the unauthorized use of someone’s image or voice.
“By no means is the law perfect,” said Pablo Tseng, partner with McMillan LLP. “Having said that, have we been caught with our pants down just because there is a new technology in place? The answer is no, we haven’t.”
Tseng said that current common law and statutory law are “readily applicable” to new technologies such as those used to create deep fakes and can be used in many situations.
These include tools to deal with issues such as appropriation of personality, intentional infliction of mental suffering, harassment, defamation, and intrusion upon seclusion. “These principles transcend technology - so is there an angle under which they might apply to deep fakes? Certainly.”
Deep fakes, in their simplest form, are synthetic media that can take various forms, including text, image, audio, or video. They can be automatically synthesized or digitally altered by a machine learning system.
An example of the controversy that can come with such imitation is a lawsuit launched by movie star Scarlett Johansson, known for roles such as Natasha Romanoff/Black Widow in the Marvel Cinematic Universe. She alleges that an artificial intelligence app used her name and likeness in an online advertisement without her consent.
Last October, the app Lisa AI: 90s Yearbook & Avatar posted an ad featuring an old clip of Johansson and a fake voice imitating her to promote the app. The ad also showed AI-generated photos that resembled the Marvel actor.
The ad, which has since been removed, had a disclaimer at the bottom that read “Images produced by Lisa AI. It has nothing to do with this person.”
As one example of a tool to fight such deep fakes, Tseng pointed to the various provinces that have codified aspects of the tort of appropriation of personality. As well, British Columbia’s Intimate Images Protection Act could be used to combat deep fakes, as it is broad enough to cover doctored images.
There are also intellectual property tools, including those dealing with copyright infringement, trademark infringement, and patent rights.
“Ultimately, these do intersect with deep fake technology,” Tseng said.
Bob Tarantino of Dentons Canada LLP told webinar attendees that the tort of misappropriation of personality “may not be the best tool” in combatting deep fakes, noting that “we’re still not completely clear on whether we are talking about something that is a privacy right or a personality right” when it comes to deep fakes.
“That can seem like an abstract distinction, but it has real implications in terms of the scope of the right and the nature of the right in terms of whether it survives somebody’s death, and whether it’s transferable and can be assigned to somebody else.”
Taking a tour across the country of the common law and statutory regimes for claims that could be used to challenge deep fakes (excluding Quebec), Tarantino noted that only four provinces have statutes relating to personality rights.
Discussing several cases since the 1970s in which the misappropriation of personality tort was advanced, Tarantino said that in the majority, the plaintiffs didn’t win. “So misappropriation of personality is not necessarily a winning claim that a lot of people have advanced in Canada.” He also pointed out that the courts have determined in these cases that “you have to have some sort of public profile” to invoke a misappropriation of personality claim.
Professor Carys Craig of Osgoode Hall Law School gave webinar participants an overview of whether copyright and intellectual property rights tools can challenge deep fakes.
She said that while there are tools within common law and statute law that can be used in deep fake cases, “what we’re doing is piecing together little pieces of the law that have some overlap . . . but are not really addressing the problem.” The question is whether there is still a gap that the law needs to fill.
Craig also noted that how artificial intelligence is trained on existing texts and data to create a deep fake or voice clone could also be significant. “This is very much a live issue in copyright law - the extent to which training AI can itself produce liability for copyright infringement.”
When asked whether treating deep fakes as a property issue was the right approach, Craig said she didn’t believe it was.
“The difficulty with approaching deep fakes as a property right is that it tends to be alienable - or something you can give away by contract,” she said. “We all know how easy that is to do in the online environment: you click on terms of service and you’ve given away an unrestricted license to use your likeness forever after.
“I much prefer a more nuanced understanding of privacy in the context of the power to control the way you are presented in public.”
Also, she added, today’s technology can present a voice that sounds like you, but isn’t you.
“How can we say that we own our voice and that it is unique and belongs to us when a technology can reproduce it?”
Craig also asked: if the voice being reproduced is part of, say, an actor’s character in a movie, who actually owns that voice? It may belong to the film studio, which owns the copyright to the film.
“We have to think carefully, once you start allocating property rights, about to whom they belong. It’s not always the person that we want to protect.”
Still, while legal tools exist to fight deep fakes, Tarantino noted that they are being applied piecemeal to situations where they occur. “That works in a system where you can actually access the courts and decisions are made in a timely manner,” he said, noting that it can often take years to get a decision from the court.
“So, I’m not sure our system is responsive and reactive in the way that it needs to be for our default position to be, well, let’s just let various claims get sorted out in the courts, and we’ll figure it out on an ad hoc basis.”
He added: “Given the potential danger that [current technology] poses to individuals, this might be a situation where legislative intervention is required.”
While most people would agree that individuals should “be able to control whether their face gets put into a deep fake video where they are portrayed as saying things that they have simply not said,” Tarantino said he’s “not clear about people having that statutory right in provinces like Ontario.”