
When Could You Be Sued for AI Malpractice? You’re Likely Using It Now

The ways in which artificial intelligence (AI) may transform the future of medicine are making headlines across the globe. But chances are, you’re already using AI in your practice every day ― you may just not realize it.

And whether you recognize the presence of AI or not, the technology could be putting you in danger of a lawsuit, legal experts say.

The use of AI in your daily practice can come with hidden liabilities, and as hospitals and medical groups deploy AI into more areas of healthcare, new liability exposures may be on the horizon.

“For physicians, AI has also not yet drastically changed or improved the way care is provided or consumed,” said Michael LeTang, chief nursing informatics officer and VP of risk management and patient safety at Healthcare Risk Advisors, part of TDC Group. “Consequently, it may seem like AI is not present in their work streams, but in reality, it has been utilized in healthcare for several years. As AI technologies continue to develop and become more sophisticated, we can expect them to play an increasingly significant role in healthcare.”

Today, most AI applications in healthcare use narrow AI, which is designed to complete a single task without human assistance, as opposed to artificial general intelligence (AGI), which pertains to human-level reasoning and problem solving across a broad spectrum. Here are some ways doctors are using AI throughout the day ― sometimes being aware of its assistance, and sometimes being unaware:

  • Many doctors use electronic health records (EHRs) with integrated AI that include computerized clinical decision support tools designed to reduce the risk of diagnostic error and to integrate decision-making in the medication ordering function.

  • Cardiologists, pathologists, and dermatologists use AI in the interpretation of vast amounts of images, tracings, and complex patterns.

  • Surgeons are using AI-enhanced surgical robotics for orthopedic surgeries, such as joint replacement and spine surgery.

  • A growing number of doctors are using ChatGPT to assist in drafting prior authorization letters for insurers. Experts say more doctors are also experimenting with ChatGPT to support medical decision-making.

  • Within oncology, physicians use machine learning techniques in the form of computer-aided detection systems for early breast cancer detection.

  • AI algorithms are often used by health systems for workflow, staffing optimization, population management, and care coordination.

  • Some systems within EHRs use AI to flag high-risk patients.

  • Physicians are using AI applications for the early recognition of sepsis, including EHR-integrated decision tools such as Hospital Corporation of America (HCA) Healthcare’s Sepsis Prediction and Optimization of Therapy and the Sepsis Early Risk Assessment algorithm.

  • About 30% of radiologists use AI in their practice to analyze x-rays and CT scans.

  • Epic Systems recently announced a partnership with Microsoft to integrate ChatGPT into MyChart, Epic’s patient portal system. Pilot hospitals will utilize ChatGPT to automatically generate responses to patient-generated questions sent via the portal.

The growth of AI in healthcare has been enormous, and it’s only going to continue, said Ravi B. Parikh, MD, an assistant professor in the departments of Medical Ethics and Health Policy and of Medicine at the University of Pennsylvania in Philadelphia.

“What’s really critical is that physicians, clinicians, and nurses using AI are provided with the tools to understand how artificial intelligence works and, most importantly, understand that they are still accountable for making the ultimate decision,” LeTang said. “The information is not always going to be the right thing to do or the most accurate thing to do. They’re still liable for making a bad decision, even if AI is driving that.”

What Are the Top AI Legal Dangers of Today?

A pressing legal risk is becoming too reliant on the suggestions that AI-based systems provide, which can lead to poor care decisions, said Kenneth Rashbaum, a New York–based cybersecurity attorney with more than 25 years of experience in medical malpractice defense.

This can occur, for example, when using clinical support systems that leverage AI, machine learning, or statistical pattern recognition. Today, clinical support systems are commonly administered through EHRs and other computerized clinical workflows. In general, such systems match a patient’s characteristics to a computerized clinical knowledge base. An assessment or recommendation is then presented to the physician for a decision.
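
To make that pattern concrete, here is a minimal sketch in Python of an alert-style decision support rule. The drug name, eGFR threshold, and suggestion text are hypothetical placeholders, not clinical guidance; the point is that the output is an advisory the physician must still weigh against the full clinical picture.

    # A minimal sketch of the pattern described above: patient characteristics are
    # matched against a small knowledge base, and an advisory is surfaced to the
    # clinician, who still makes the final call. Thresholds and drug names are
    # hypothetical and for illustration only.

    def renal_dosing_advisories(patient, knowledge_base):
        """Return advisory strings; nothing here is applied automatically."""
        advisories = []
        for order in patient["medication_orders"]:
            rule = knowledge_base.get(order["drug"])
            if rule and patient["egfr"] < rule["min_egfr"]:
                advisories.append(
                    f"{order['drug']}: eGFR {patient['egfr']} below {rule['min_egfr']}; "
                    f"consider {rule['suggestion']}"
                )
        return advisories

    knowledge_base = {  # hypothetical rule, not clinical guidance
        "metformin": {"min_egfr": 30, "suggestion": "dose review or alternative agent"},
    }
    patient = {"egfr": 24, "medication_orders": [{"drug": "metformin"}]}

    for advisory in renal_dosing_advisories(patient, knowledge_base):
        print("ADVISORY:", advisory)  # presented to the physician, not auto-applied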

“If the clinician blindly accepts it without considering whether it’s appropriate for this patient at this time with this presentation, the clinician may bear some responsibility if there is an untoward result,” Rashbaum said.

“A common claim, even in the days before the EMR [electronic medical record] and AI, was that the clinician did not take all available information into account in rendering treatment, including history of past and present condition, as reflected in the records, communication with past and other present treating clinicians, lab and radiology results, discussions with the patient, and physical examination findings,” he said. “So, if the clinician relied upon the support prompt to the exclusion of these other sources of information, that could be a very strong argument for the plaintiff.”

Chatbots, such as OpenAI’s ChatGPT, are another form of AI raising legal red flags. ChatGPT, trained on a massive set of text data, can carry on conversations, write code, draft emails, and answer nearly any question posed. The chatbot has gained considerable credibility for accurately diagnosing rare conditions in seconds, and it recently passed the US Medical Licensing Examination.

It’s unclear how many doctors are signing onto the ChatGPT website daily, but physicians are actively using the chatbot, particularly for assistance with prior authorization letters and to support decision-making processes in their practices, said LeTang.

When physicians ask ChatGPT a question, however, they should be mindful that ChatGPT could “hallucinate,” a term that refers to a generated response that sounds plausible but is factually incorrect or is unrelated to the context, explains Harvey Castro, MD, an emergency physician, ChatGPT healthcare expert, and author of the 2023 book, ChatGPT and Healthcare: Unlocking the Potential of Patient Empowerment.

Acting on ChatGPT’s response without vetting the information places doctors at serious risk of a lawsuit, he said.

“Sometimes, the response is half true and half false,” he said. “Say, I go outside my specialty of emergency medicine and ask it about a pediatric surgical procedure. It could give me a response that sounds medically correct, but then I ask a pediatric cardiologist, and he says, ‘We don’t even do this. This doesn’t even exist!’ Physicians really have to make sure they are vetting the information provided.”

In response to ChatGPT’s growing usage by healthcare professionals, hospitals and practices are quickly implementing guidelines, policies, and restrictions that caution physicians about the accuracy of ChatGPT-generated information, adds LeTang.

Emerging best practices include avoiding the input of patient health information, personally identifiable information, or any data that could be commercially valuable or considered the intellectual property of a hospital or health system, he said.

“Another crucial guideline is not to rely solely on ChatGPT as a definitive source for clinical decision-making; physicians must exercise their professional judgment,” he said. “If best practices are not adhered to, the associated risks are present today. However, these risks may become more significant as AI technologies continue to evolve and become increasingly integrated into healthcare.”

Other dangers include the potential for misdiagnosis by AI systems and the risk of unnecessary procedures being performed if physicians do not thoroughly evaluate and validate AI predictions.

As an example, LeTang described a case in which a physician documents in the EHR that a patient has presented to the emergency department with chest pains and other signs of a heart attack, and an AI algorithm predicts that the patient is experiencing an active myocardial infarction. If the physician then sends the patient for stenting or an angioplasty without other concrete evidence or tests to confirm the diagnosis, the doctor could later face a misdiagnosis complaint if the costly procedures were unnecessary.

“That’s one of the risks of using artificial intelligence,” he said. “A large percentage of malpractice claims involves failure to diagnose, delayed diagnosis, or inaccurate diagnosis. What falls in the category of failure to diagnose is sending a patient for an unnecessary procedure or having an adverse event or bad outcome because of the failure to diagnose.”

So far, no AI lawsuits have been filed, but they may make an appearance soon, said Sue Boisvert, senior patient safety risk manager at The Doctors Company, a national medical liability insurer.

“There are hundreds of AI programs currently in use in healthcare,” she said. “At some point, a provider will make a decision that is contrary to what the AI recommended. The AI may be wrong, or the provider may be wrong. Either way, the provider will neglect to document their clinical reasoning, a patient will be harmed, and we will have the first AI claim.”

Upcoming AI Legal Risks to Watch For

Lawsuits that allege biased patient care by physicians on the basis of algorithmic bias may also be forthcoming, analysts warn.

Much has been written about algorithmic bias that compounds and worsens inequities in socioeconomic status, ethnicity, sexual orientation, and gender in health systems. In 2019, a groundbreaking article in Science shed light on commonly used algorithms that are considered racially biased and how healthcare professionals often use such information to make medical decisions.

No claims involving AI bias have come down the pipeline yet, but it’s an area to watch, said Boisvert. She noted a website that highlights complaints and accusations of AI bias, including in healthcare.

“We need to be sure the training of the AI is appropriate, current, and broad enough so that there is no bias in the AI when it’s participating in the decision-making,” said Boisvert. “Imagine if the AI is diagnosing based on a dataset that is not local. It doesn’t represent the population at that particular hospital, and it’s providing inaccurate information to the physicians who are then making decisions about treatment.”

In pain management, for example, there are known differences in how patients experience pain, Boisvert said. If AI was being used to develop an algorithm for how a particular patient’s postoperative pain should be managed, and the algorithm did not include the differences, the pain control for a certain patient could be inappropriate. A poor outcome resulting from the treatment could lead to a claim against the physician or hospital that used the biased AI system, she said.
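
One way to picture that concern: before a model trained elsewhere informs local treatment decisions, its performance can be checked against the local population, overall and by subgroup. The short Python sketch below uses fabricated records and numbers purely to show the shape of such a check.

    # Illustrative local-validation check for a model trained on a non-local
    # dataset. All records and numbers are fabricated for illustration.

    def accuracy(cases):
        return sum(c["prediction"] == c["outcome"] for c in cases) / len(cases)

    # Hypothetical local validation cases, labeled by a subgroup of interest.
    local_cases = [
        {"group": "A", "prediction": 1, "outcome": 1},
        {"group": "A", "prediction": 0, "outcome": 0},
        {"group": "A", "prediction": 1, "outcome": 1},
        {"group": "B", "prediction": 1, "outcome": 0},
        {"group": "B", "prediction": 0, "outcome": 1},
        {"group": "B", "prediction": 1, "outcome": 1},
    ]

    for group in sorted({c["group"] for c in local_cases}):
        subset = [c for c in local_cases if c["group"] == group]
        print(f"Group {group}: local accuracy {accuracy(subset):.0%}")
        # A large gap between subgroups, or against the developer's reported
        # figures, is a flag to investigate before relying on the tool.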

In the future, as AI becomes more integrated and accepted in medicine, there may be a risk of legal complaints against doctors for not using AI, said Saurabh Jha, MD, an associate professor of radiology at the University of Pennsylvania and a scholar of AI in radiology.

“Ultimately, we might get to a place where AI starts helping physicians detect more or reduce the miss of certain conditions, and it becomes the standard of care,” Jha said. “For example, if it became part of the standard of care for pulmonary embolism [PE] detection, and you didn’t use it for PE detection, and there was a miss, that could put you at legal risk. We’re not at that stage yet, but that is one future possibility.”

Parikh envisions an even cloudier liability landscape as the potential grows for AI to control patient care decisions. In such a scenario, rather than just issuing an alert or prediction to a physician, the AI system could trigger an action.

For instance, an algorithm trained to predict sepsis could, once triggered, initiate a nurse-led rapid response or a change in patient care outside the clinician’s control, said Parikh, who co-authored a recent article on AI and medical liability in The Milbank Quarterly.

“That’s still very much the minority of how AI is being used, but as evidence is growing that AI-based diagnostic tools perform equivalent or even superior to physicians, these autonomous workflows are being considered,” Parikh said. “When the ultimate action upon the patient is more determined by the AI than what the clinician does, then I think the liability picture gets murkier, and we should be thinking about how we can respond to that from a liability framework.”
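
As a rough illustration of that distinction, the Python sketch below contrasts an alert-only workflow, in which a model’s sepsis risk score is simply surfaced to the clinician, with an autonomous workflow, in which crossing a threshold triggers an action on its own. The model, threshold, and paging function are all hypothetical stand-ins.

    # Hypothetical contrast between an alert-only and an autonomous AI workflow.
    # The "model," threshold, and downstream actions are illustrative stand-ins.

    SEPSIS_THRESHOLD = 0.8  # illustrative cutoff, not a validated value

    def predict_sepsis_risk(vitals):
        # Stand-in for a trained model's probability output.
        return 0.87 if vitals["lactate"] > 2.0 and vitals["heart_rate"] > 110 else 0.10

    def alert_only_workflow(vitals, notify_clinician):
        risk = predict_sepsis_risk(vitals)
        if risk >= SEPSIS_THRESHOLD:
            notify_clinician(f"Sepsis risk {risk:.2f}: please review")  # clinician decides

    def autonomous_workflow(vitals, page_rapid_response_team):
        risk = predict_sepsis_risk(vitals)
        if risk >= SEPSIS_THRESHOLD:
            page_rapid_response_team()  # action fires before a clinician weighs in

    vitals = {"lactate": 3.1, "heart_rate": 118}
    alert_only_workflow(vitals, notify_clinician=print)
    autonomous_workflow(vitals, page_rapid_response_team=lambda: print("Rapid response paged"))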

How You Can Prevent AI-Related Lawsuits

The first step to preventing an AI-related claim is being aware of when and how you are using AI.

Ensure you’re informed about how the AI was trained, Boisvert stresses.

“Ask questions!” she said. “Is the AI safe? Are the recommendations accurate? Does the AI perform better than current systems? In what way? What databases were used, and did the programmers consider bias? Do I understand how to use the results?”

Never blindly trust the AI but rather view it as a data point in a medical decision, said Parikh. Ensure that other sources of medical information are properly accessed and that best practices for your specialty are still being followed.

When using any form of AI, document your usage, adds Rashbaum. A record that clearly outlines how the physician incorporated the AI is critical if a claim later arises in which the doctor is accused of AI-related malpractice, he said.

“Indicating how the AI tool was used, why it was used, and that it was used in conjunction with available clinical information and the clinician’s best judgment could reduce the risk of being found responsible as a result of AI use in a particular case,” he said.
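
As a sketch of what such a record might capture, the structured note below lists the tool, its output, the other information considered, and the clinician’s final reasoning. The field names and values are hypothetical, not a required or standard format.

    # Hypothetical structure for documenting AI use in the chart; the field
    # names and values are illustrative, not a standard.
    from datetime import datetime, timezone

    ai_use_note = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": "ExampleSepsisModel v2.1",  # placeholder tool name and version
        "purpose": "early sepsis risk screening",
        "tool_output": "risk score 0.87 (high)",
        "other_information_considered": [
            "vital signs and lactate trend",
            "prior records and current medications",
            "bedside examination findings",
        ],
        "clinician_reasoning": "Findings independently consistent with early sepsis; "
                               "initiated workup and treatment per protocol.",
        "final_decision_by": "attending physician",
    }

    for field, value in ai_use_note.items():
        print(f"{field}: {value}")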

Use chatbots such as ChatGPT the way they were intended: as support tools rather than definitive diagnostic instruments, adds Castro.

“Doctors should also be well-trained in interpreting and understanding the suggestions provided by ChatGPT and should use their clinical judgment and experience alongside the AI tool for more accurate decision-making,” he said.

In addition, because no AI insurance product exists on the market, physicians and organizations using AI ― particularly for direct healthcare ― should evaluate their current insurance or insurance-like products to determine where a claim involving AI might fall and whether the policy would respond, said Boisvert. The AI vendor/manufacturer will likely have indemnified themselves in the purchase and sale agreement or contract, she said.

It will also become increasingly important for medical practices, hospitals, and health systems to put in place strong data governance strategies, LeTang said.

“AI relies on good data,” he said. “A data governance strategy is a key component to making sure we understand where the data is coming from, what it represents, how accurate it is, if it’s reproducible, what controls are in place to ensure the right people have the right access, and that, if we’re starting to use it to build algorithms, it’s deidentified.”

While no malpractice claims associated with the use of AI have yet surfaced, this may change as courts catch up on the backlog of malpractice claims that were delayed because of COVID-19, and even more so as AI becomes more prevalent in healthcare, LeTang said.

“Similar to the attention that autonomous driving systems, like Tesla, receive when the system fails and accidents occur, we can be assured that media outlets will widely publicize AI-related medical adverse events,” he said. “It is crucial for healthcare professionals, AI developers, and regulatory authorities to work together to ensure the responsible use of AI in healthcare, with patient safety as the top priority. By doing so, they can mitigate the risks associated with AI implementation and minimize the potential for legal disputes arising from AI-related medical errors.”

