Experts are cautioning that the integration of artificial intelligence in healthcare may lead to a legally intricate blame game when determining responsibility for medical errors.
The field of AI for clinical applications is rapidly advancing, with researchers developing an array of tools, from algorithms for scan interpretation to systems for assisting in diagnosis. AI is also being designed to improve hospital operations, such as enhancing bed utilization and addressing supply chain issues.
While specialists acknowledge the potential benefits of this technology in healthcare, they express concerns regarding insufficient testing of AI tools’ effectiveness and uncertainties about accountability in cases of negative patient outcomes.
“There will undoubtedly be situations where there’s a perception that something has gone awry, and people will seek someone to blame,” remarked Derek Angus, a professor at the University of Pittsburgh.
The Journal of the American Medical Association hosted the JAMA Summit on Artificial Intelligence last year, gathering experts from various fields, including clinicians, tech companies, regulatory bodies, insurers, ethicists, lawyers, and economists.
The resulting report, of which Angus is the lead author, discusses the nature of AI tools, their application in healthcare, and the various challenges they present, including legal implications.
Co-author Glenn Cohen, a Harvard Law School professor, indicated that patients might find it challenging to demonstrate negligence concerning AI product usage or design. Accessing information about these systems can be difficult, and proposing reasonable alternative designs or linking adverse outcomes to the AI system may prove unwieldy.
“Interactions among involved parties can complicate litigation,” he noted. “Each party may blame the others, have pre-existing agreements redistributing liability, and may pursue restitution actions.”
Michelle Mello, a Stanford Law School professor and another report author, stated that while courts are generally equipped to handle legal matters, the process can be slow and produce inconsistent results in its early stages. “This uncertainty increases costs for everyone engaged in the AI innovation and adoption ecosystem,” she remarked.
The report also highlights concerns regarding the evaluation of AI tools, pointing out that many fall outside the jurisdiction of regulatory bodies like the U.S. Food and Drug Administration (FDA).
Angus commented, “For clinicians, efficacy typically translates to improved health outcomes, but there’s no assurance that regulators will mandate evidence of such improvement.” He added that once an AI tool is launched, its application can vary widely among users of differing skills, in diverse clinical environments, and with various patient types. There’s little certainty that what seems advantageous in a pre-approval context will manifest as intended.
The report details numerous obstacles to evaluating AI tools, noting that clinical application is often necessary for thorough evaluation, while current assessment methods can be prohibitively expensive and cumbersome.
Angus emphasized that investing in digital infrastructure is crucial and that adequate funding is essential for effectively assessing AI tools’ performance in healthcare. “One point raised during the summit was that the most respected tools are often the least utilized, whereas the most adopted tools tend to be the least valued.”
Source: www.theguardian.com