The use case for artificial intelligence (AI) and machine learning in healthcare is becoming compelling as the world grapples with the global coronavirus pandemic. But AI has myriad applications in routine healthcare as well and will soon be used regularly throughout the medical field.
As the use of technology and AI proliferates across the healthcare industry, mistakes become inevitable. That reality raises a difficult question: who is liable when AI makes a mistake that harms a patient?
There are no easy answers to this question, particularly in an industry as litigious as healthcare. A recent STAT article breaks the issue down based on how the AI algorithm in a particular scenario was developed. If an algorithm is developed directly by a medical facility, that facility would be responsible for any AI mistakes under the legal doctrine of enterprise liability.
Basically, if a healthcare facility uses AI and removes humans from the decision-making process, it will be liable for any mistakes made. However, if the AI algorithm was purchased commercially, the issue becomes more complex. The concept of preemption could be invoked in this case, shifting some of the risk away from the vendor.
Preemption applies when state and federal laws conflict. In the healthcare industry, if a product or technology isn't regulated by state law, federal law would prevail. That means the AI vendor would not be required to comply with state laws in this case, and the healthcare facility would be responsible for any problems related to an AI-based decision or diagnosis.
AI, from a litigation standpoint, is somewhat in limbo right now. That's because AI algorithms are not considered static products, and vendors, let alone the FDA, cannot predict how they will perform in the future. AI algorithms are not currently classified as drugs or devices, and that distinction will influence how the FDA and the court systems handle their successes and failures moving forward.
To further complicate matters, many AI algorithms are being expedited for approval right now. The coronavirus pandemic is accelerating the path to market for anything that could aid healthcare decision making and treatment, including developing technologies like AI and machine learning.
The takeaway is that it's still largely unclear who will be responsible for mistakes made by AI algorithms and software. Healthcare providers should absolutely make use of new technologies to augment and assist their work, particularly during times of crisis. But they should also be vigilant about AI outcomes and ensure humans serve as a safety check on important decisions in the event AI makes a mistake.
To learn more about how AI technologies are transforming the healthcare industry and beyond, TMC is hosting the Future of Work Expo from February 9-12, 2021 at the Miami Beach Convention Center. The event will explore how AI and machine learning can improve healthcare and business applications, communications, collaboration, contact center and customer service, and marketing and sales experiences and initiatives.