AI's role in medical negligence cases

Charles Gluckstein on risks and rewards as technology transforms the healthcare system


This article was produced in partnership with Gluckstein Lawyers

We’ve reached a point where robotics entering the human healthcare system is no longer the stuff of science fiction. At this stage of technological advancement, we could soon be treating brain injuries by hooking ourselves up to a Neuralink, correcting genetic predispositions to certain conditions via CRISPR, implanting a bionic eye, or having a robot provide 24/7 surveillance of an injured person. But with this potential comes significant risk, says Charles Gluckstein.

“We’re on a path of improving outcomes in healthcare, with the use of AI leading to fewer medical mistakes,” says Gluckstein, managing partner of Gluckstein Lawyers. “But if it’s used as a shortcut, it will actually increase mistakes and lead to more litigation for practitioners if they’re not careful.”

Operational efficiency, better patient outcomes, and improved healthcare experience

With the younger generation of healthcare professionals trained on AI and VR headsets, for example, there is a gap in training and experience that could prove problematic. Gluckstein likens it to the shift to digital records, when nurses’ crib notes on patients’ charts were no longer enough and they were required to transfer that data into the computer. He recalls a case the firm handled in which a NICU baby became hypoglycemic and developed a seizure disorder because the nurses hadn’t checked the computer to determine whether the baby’s feeding schedule was on track. The issue was a lag in adapting to the new process: the digital records weren’t being used properly.

Currently, doctors are using AI for greater efficiency: to summarize interactions with patients for the chart and to help draft letters back to referring doctors. They are also relying on it to assist with differential diagnosis, where the doctor may not have thought of something more subtle or where a checklist to rule out a possible differential is more readily available. The next step is potentially using AI to identify drug interactions, provide pharmacare customized to the individual patient, or even record what’s going on in an operating room, much as police officers are now required to wear body cams. Surveillance provides a video record rather than relying on reports that may not be complete, accurate, or contemporaneous.

On the patient side, there’s a shift to “medicine 3.0,” which focuses on prevention and health monitoring, such as using wearables to measure things like blood pressure or glucose levels. This is a game-changer, particularly for people in remote communities. Technology is also enabling innovations such as 3D printing of organs and other body parts, robotic surgeries that are less invasive and allow quicker recovery, gene-editing tools, and increasingly effective and timely vaccines.

“These are all examples of where it should go — it should help in operational efficiency, getting patients better outcomes, and improving society’s experience with healthcare overall,” Gluckstein says.

There are also ethical issues to consider, including doctor-patient confidentiality in the case of surveillance, but more broadly there’s the concern over AI’s programming. If it’s making predictions about a certain type of medicine or a diagnosis, for example, what was the sample size? Was the data drawn from a rural or urban community? An impoverished segment? A single racial group?

“You get all sorts of biases built into the algorithm that is then set in the programming going forward,” Gluckstein says. “Relying on it can result in one-sided or very bad outcomes, making it a significant consideration as things continue to evolve and new problems emerge.”

AI’s impact on medical negligence cases

As fast as technology advances, the analysis for a medical malpractice case remains unchanged: did the medical professional breach the standard of care, and but for that breach, would the harm have been avoided?

As experts use these tools to help with differential diagnosis, keeping records, instructing staff, delegating tasks, or potentially to provide surveillance, “all of that enters into our deep dive as to whether a breach occurred,” Gluckstein notes. Was the doctor trained properly in robotic surgery, for instance? Did the doctor use it effectively? Experts on those specific techniques can be brought in to support or refute the issue at hand, as is standard procedure in these cases.

“The fundamental process hasn’t changed, and technology may ultimately provide us with more information,” Gluckstein says. As an example, he points to the development of electronic fetal heart rate monitoring, which has made far more data available in birth trauma cases, where lawyers previously relied on records such as midwives’ auscultation notes.

“The legal issues will still be there, but we’ll have other tools to analyze them,” Gluckstein sums up.

Overall, the caution he gives to lawyers about using AI is the same one he’d give to doctors and other healthcare professionals: as much as these tools are helpful and speed up tasks like reviewing records, the work remains a delegated responsibility.

“The medical professional, like the lawyer, who signs his or her name to a report must ensure its accuracy — they can’t just press send,” he says. “It’s extra effort in the beginning, but as AI and the professional learn to work better together, like working with a new colleague, the trust level increases as time goes on.”

Gluckstein Lawyers is committed to staying on top of the latest developments, discussing cutting-edge innovations, future possibilities, and the potential impacts of technology at its annual Risky Business Conference. For more information and insight from the firm, visit its Blog & News page.