Hallucinations are no longer a valid excuse for lawyers' AI avoidance, says legal tech expert

Thomas Martin says AI tools are now reliable enough for legal practice – lawyers just need to keep up


For the past couple of years, the risk of hallucinations – the tendency of generative AI tools to invent facts – has been a leading reason lawyers have shied away from adopting AI. But that excuse no longer holds water.

This is according to Thomas Martin, founder and CEO of legal tech company LawDroid and consulting firm Deep Legal Consulting.

“There are definitely ways to mitigate hallucinations and employ AI reliably within a law firm context,” Martin says.

As a lawyer himself, Martin understands the profession’s hesitation: “We have fiduciary duty, but also, we are perfectionists just by personality type, and we want to get it right.”

The AI reliability pyramid

Martin describes the evolving reliability of AI tools as a pyramid.

At the base lies the casual use of models like ChatGPT through simple, unstructured prompts. One step up is prompt engineering – learning to ask better questions for more accurate results. “This gives you even more accurate information because AI knows what you're asking for,” he says.
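For illustration, a structured prompt spells out the role, the source material, the task, and the expected output format instead of relying on a one-line question. The sketch below is a hypothetical Python helper – the field names and example wording are assumptions for this article, not drawn from Martin's tools.

```python
# Illustrative only: a structured prompt template for a legal drafting task.
# The field names and example text are hypothetical, not from any vendor tool.

def build_structured_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Assemble an explicit prompt so the model knows exactly what is being asked."""
    return (
        f"Role: {role}\n"
        f"Context:\n{context}\n\n"
        f"Task: {task}\n"
        f"Output format: {output_format}\n"
        "If any required fact is missing from the context, say so instead of guessing."
    )

prompt = build_structured_prompt(
    role="You are a civil litigation associate.",
    context="Client received a demand letter alleging breach of a commercial lease ...",
    task="Summarize the alleged breaches and list the facts we still need from the client.",
    output_format="Two short bulleted lists: 'Alleged breaches' and 'Open questions'.",
)
print(prompt)
```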

Further up the pyramid is retrieval-augmented generation (RAG), which connects the AI model to a verifiable external source of truth. The key, Martin notes, is determining the quality of that source. “If the source is [the] internet, I would take it with a grain of salt. However, if it's a legal AI vendor, our bread and butter is reliant upon us providing high-quality information to attorneys, so we would likely connect it to a source of truth, like actual case opinions or actual statutes … that have been vetted by other lawyers,” he says.
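A minimal sketch of the retrieval step, assuming a small in-memory corpus of vetted case summaries, might look like the Python below. The corpus entries and helper are hypothetical; a production legal RAG pipeline would instead query a curated database of opinions and statutes, typically via embedding search.

```python
# A minimal retrieval-augmented generation sketch, assuming a small in-memory
# corpus of vetted case summaries. A real legal RAG system would use embedding
# search over a curated database of opinions and statutes instead.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Smith v. Jones (2019): landlord must give 30 days' written notice before ...",
    "Doe v. Acme Corp. (2021): arbitration clause unenforceable where ...",
    "In re Widget LLC (2020): fiduciary duty of managers extends to ...",
]  # hypothetical summaries, for illustration only

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    vectorizer = TfidfVectorizer().fit(corpus + [query])
    doc_vecs = vectorizer.transform(corpus)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_vecs)[0]
    ranked = sorted(zip(scores, corpus), reverse=True)
    return [text for _, text in ranked[:k]]

question = "What notice must a landlord give before terminating a lease?"
sources = retrieve(question)

# The retrieved passages are injected into the prompt so the model answers
# from the vetted source of truth rather than from its own memory.
prompt = (
    "Answer using only the sources below. Cite the source you rely on.\n\n"
    + "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    + f"\n\nQuestion: {question}"
)
print(prompt)
```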

At the top of the pyramid is a technique called memory tuning. Here, the model is trained on a high-quality dataset to internalize its language and facts – going beyond just referencing external sources.

Using low-rank adapters, the AI can “memorize certain facts at very low latency,” Martin says, adding that the method allows for faster, more efficient, and highly accurate legal outputs.
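One common way to attach low-rank adapters is Hugging Face's peft library. The sketch below is illustrative only – the base model, target modules, and hyperparameters are assumptions, and it is not Martin's or any particular vendor's pipeline – but it shows where the small set of trainable adapter weights sits relative to the frozen base model.

```python
# A minimal sketch of attaching low-rank adapters (LoRA) to a causal language
# model with Hugging Face's peft library. Base model, target modules, and
# hyperparameters are illustrative; a real memory-tuning pipeline would train
# the adapters on a vetted, high-quality legal dataset.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model_name = "facebook/opt-350m"  # small open model, chosen for illustration
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# Low-rank adapters add a small number of trainable weights alongside the
# frozen base model, so facts can be baked in without full fine-tuning.
lora_config = LoraConfig(
    r=16,                 # rank of the adapter matrices
    lora_alpha=32,        # scaling factor applied to the adapter output
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; model-dependent
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model

# From here, the adapter weights would be trained on curated legal text
# (for example, vetted case summaries) using a standard training loop.
```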

Denial and delay

Martin believes the real challenge lies not in technology but in lawyers’ attitudes. “It's always up to them whether or not they're going to do it. But... it would be a very poor choice not to become educated” on AI, he says.

He adds that too many lawyers assume AI is still not ready for legal use. “There's denial, and there's also a lack of education,” Martin says. “They fail to understand where AI is currently and that it is production ready.”

The danger of staying on the sidelines, he says, is getting left behind. “They're going to get outcompeted. They just don't realize it or don’t want to face that truth.”

Martin used to believe that AI would augment lawyers’ work, freeing them from repetitive tasks. Now, he sees something more fundamental happening.

“There’s a reality check… AI has gotten so good that there’s a lot of tasks and functions that lawyers have traditionally done that it is just going to do.”

That doesn’t mean lawyers will vanish entirely. Instead, Martin foresees the end of the “unitary legal artisan” – the lawyer who handles every part of a matter from start to finish. The profession, he says, will splinter into new roles.

“There’s going to be many more types of professional roles within the legal profession,” he says.

Martin envisions law firms embedding real-time legal risk systems into client companies, enabling lawyers to proactively monitor and respond to issues before they escalate. He argues this will create new business opportunities for law firms and reduce client risk.

Fewer lawyers – or just different ones?

Martin is cautious about predicting a drastic drop in the number of lawyers, but he believes fewer will be needed to do the same amount of work.

“If you have an international law firm that has 5,000 lawyers, you probably could have the same effect with 2,000 lawyers that are using AI.”

Still, he adds that new AI-driven services might create new demand. He likens the situation to ATMs and bankers: “The need for bankers initially dipped, but then it actually ended up creating more demand.”

In Martin’s view, the lawyers most at risk aren’t defined by age but by mindset.

“The worst age to be right now as a lawyer – or actually anyone – is to be at an age where you think old, where you think close-minded,” he says.

Still, he suggests, lawyers in their 30s who resist technology may be in more trouble than older colleagues nearing retirement.

“If you're 30 years old and you're saying, ‘I don't like this technology,’ unfortunately, that's the world we're living in.”

Smaller firms have the edge

Martin believes smaller firms are better positioned when it comes to embracing AI. “I 100 percent think that the advantage goes to the smaller, nimbler players,” he says.

He says that large firms may have deeper pockets, but they also come with “institutional baggage” – layers of bureaucracy, rigid billing models, and partner-driven structures that can stifle innovation.

However, if decision-makers in large law firms see how AI might allow for closer client relationships and new revenue models, he adds, “I think they would fight less against it.”

While the transition may reduce billable hours in the short term, “humans are clever,” he says. “We’re going to find ways of surviving and making money.”