A Wyoming judge found that the lawyers had included fake case citations in a Walmart lawsuit
Two lawyers from US personal injury firm Morgan & Morgan face sanctions from a Wyoming federal judge after they included fake case citations hallucinated by an artificial intelligence program in a suit against Walmart.
One of the lawyers confessed to the act and apologized for what he described as an inadvertent mistake in court filings last week, according to Reuters. The case involved a supposedly defective hoverboard toy.
The judge has yet to decide whether the lawyers will be disciplined. This month, Morgan & Morgan issued an email to its roster of over a thousand lawyers reminding them that AI can generate fictitious case law and that using such information in court filings can result in termination.
A spokesperson for the firm did not respond to a request for comment on the Walmart matter. Walmart declined to comment.
Research conducted last year by Thomson Reuters revealed that 63 percent of lawyers have used AI in their work, with 12 percent saying they use it regularly. Nonetheless, legal experts warned that generative AI has a tendency to invent facts, since models produce responses based on statistical patterns learned from large datasets rather than verifying the facts in those datasets.
Last year, the American Bar Association reminded members that lawyers must verify the information included in their court filings, specifying that the obligations extended to AI-generated statements.
“When lawyers are caught using ChatGPT or any generative AI tool to create citations without checking them, that's incompetence, just pure and simple,” said Andrew Perlman, dean of Suffolk University's law school, in a statement published by Reuters.
In June 2023, two New York-based lawyers were fined US$5,000 for citing fictitious cases in a personal injury suit against an airline. Former Donald Trump lawyer Michael Cohen and his attorney also narrowly avoided sanctions for using fake case citations generated by Google's AI chatbot Bard.
In November 2024, a lawyer was ordered to pay a US$2,000 penalty for citing invented cases and quotations in a wrongful termination suit filed in Texas. The lawyer was also required to participate in a course about using generative AI in the legal profession.
Last month, a purported misinformation expert admitted to including AI-invented citations in a case concerning a deepfake parody of former US Vice President Kamala Harris; the admission shattered the expert’s credibility with the court, a Minnesota judge said.
According to University of Colorado law school professor Harry Surden, the issue is a lack of AI literacy among lawyers.
“Lawyers have always made mistakes in their filings before AI. This is not new,” he said in a statement published by Reuters.
He recommended that lawyers educate themselves on AI tools’ strengths and weaknesses.