Respondents to a survey by an estate planning tech company reported worries about errors and security
Many Canadians are hesitant to rely on artificial intelligence to produce legal documents, citing concerns about potential errors, security issues, and more, according to a new survey by estate planning company Willful.
More than half (54.7 percent) of the respondents to Willful’s survey said they would not trust AI to draft a legal document, compared to 19.2 percent of respondents who said they felt comfortable using AI for this task. Another 26.1 percent reported that while they didn’t feel they could currently rely on AI to produce legal documents, they were open to the possibility in the future as the technology improves.
Willful surveyed more than 500 respondents over one week in August.
Asked what types of legal documents they would not currently trust an AI assistant to generate, more than three-quarters of respondents pointed to last wills and testaments. Other top choices were power of attorney documents (67.6 percent), mortgage or real estate agreements (54.9 percent), and employment contracts (53.9 percent).
Erin Bury, co-founder and CEO of Willful, says it’s understandable that most respondents do not trust AI to generate legal documents, given how new the technology is. But she also points to the “solid amount” of respondents who said they trust AI to do the job now.
The top concerns respondents cited for their hesitation were lack of human involvement (31.8 percent), errors or omissions (23.3 percent), security and privacy (22.6 percent), and the potential that AI-generated documents won't reflect their personal needs or situations (15.9 percent).
Bury said that companies using legal AI tech can proactively address some of these apprehensions. At Willful, for example, in addition to using AI tech, “all of our legal content is built in tandem with estate lawyers, we have human customer support agents, we have humans writing all of our educational content, and it's all reviewed and vetted by estate lawyers,” Bury says.
“I do think that right now, the key is to keep humans involved, both because we know that AI can have inaccuracies or can be trained on incorrect data and because we know that it's essential for people to trust that they're getting solid documents and getting solid information,” Bury adds.
While 17.2 percent of survey respondents said they didn’t see any way that AI could assist with legal issues, more (28.7 percent) reported that AI could help explain legal jargon and documents in simple terms. Fewer respondents were confident that AI could help them produce legal documents without a lawyer, answer their legal questions, or answer those questions in their native language.
Bury says that because so many people trust AI to break down legal language into simple terms, legal service providers have a “huge opportunity” to integrate AI into their businesses. Using AI in this way could “not only inform people about the process and give them information that can help them decide on what goes into their document, but also explain those documents once they're drafted, whether they're drafted by AI or a system like Willful or a lawyer,” she says.