Future of Work News

AI in Healthcare: Who's to Blame When Things Go Wrong

By

The case for artificial intelligence (AI) and machine learning in healthcare has become increasingly compelling as the world grapples with the coronavirus pandemic. But AI also has myriad applications in routine care and will soon be used regularly throughout the medical field.

As the use of technology and AI proliferates across the healthcare industry, mistakes become inevitable. That raises the question of who is liable when AI makes a mistake that harms a patient.

There are no easy answers to this question, particularly in an industry as litigious as healthcare. A recent STAT article breaks the issue down by how the AI algorithm in a given scenario was developed. If an algorithm is developed in-house by a medical facility, that facility would be responsible for any AI mistakes under the legal doctrine of enterprise liability.

Basically, if a healthcare facility uses AI and removes humans from the decision-making process, it will be liable for any mistakes made. However, if the AI algorithm was purchased commercially, the issue becomes more complex. The concept of preemption could be invoked in this case, which means a vendor would shed some of its risk.

Preemption arises when state and federal laws conflict; in such cases, federal law prevails. For the healthcare industry, if a product or technology is regulated at the federal level rather than mandated by state law, the AI vendor would not be required to comply with conflicting state laws, and the healthcare facility could end up bearing responsibility for any problems related to an AI-based decision or diagnosis.

AI, from a litigation standpoint, is somewhat in limbo right now. That's because AI algorithms are not considered static products, and vendors, let alone the FDA, cannot predict how they will perform in the future. AI algorithms are not currently classified as drugs or devices, and that distinction will influence how the FDA and the court systems handle their successes and failures moving forward.

To further complicate matters, many AI algorithms are being expedited for approval right now. The coronavirus pandemic is only accelerating the path for anything that could facilitate healthcare decision-making and treatment, including developing technologies like AI and machine learning.

The takeaway is that it's still largely unclear who will be responsible for mistakes made by AI algorithms and software. Healthcare providers should absolutely make use of new technologies to augment their work, particularly during times of crisis. But they should also be vigilant about AI outcomes and ensure that humans are safety-checking important decisions in case the AI makes a mistake.
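To illustrate what keeping a human in the loop might look like in practice, here is a minimal sketch of a confidence-threshold gate that routes low-confidence AI predictions to a clinician instead of acting on them automatically. The function names and the 0.90 threshold are hypothetical, not drawn from any real clinical system.

```python
# Hypothetical human-in-the-loop gate for AI-assisted decisions.
# Predictions below a confidence threshold are flagged for clinician
# review rather than being suggested automatically.

REVIEW_THRESHOLD = 0.90  # illustrative cutoff, not a clinical standard

def route_prediction(diagnosis: str, confidence: float) -> dict:
    """Return a routing decision for a single AI prediction."""
    needs_review = confidence < REVIEW_THRESHOLD
    return {
        "diagnosis": diagnosis,
        "confidence": confidence,
        "needs_human_review": needs_review,
    }

# A low-confidence result is flagged for a human safety check.
decision = route_prediction("pneumonia", 0.72)
```

The design point is simply that the software records which decisions were reviewed by a person, which also creates the audit trail that liability questions like those above tend to turn on.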

To learn more about how AI technologies are transforming the healthcare industry and beyond, TMC is hosting the Future of Work Expo from June 22-25, 2021 at the Miami Beach Convention Center. The event will explore how AI and machine learning can improve healthcare and business applications, communications, collaboration, contact center and customer service, and marketing and sales experiences and initiatives.

Edited by Maurice Nagle

Future of Work Contributor
