News Release

Scientists argue for more FDA oversight of healthcare AI tools

Peer-Reviewed Publication

PLOS

The U.S. Food and Drug Administration (FDA) needs an agile, transparent, and ethics-driven oversight system to balance innovation with patient safety for artificial intelligence-driven medical technologies. That is the takeaway from a new report to the FDA, published this week in the open-access journal PLOS Digital Health by Leo Celi of the Massachusetts Institute of Technology and colleagues.

Artificial intelligence is becoming a powerful force in healthcare, helping doctors diagnose diseases, monitor patients, and even recommend treatments. Unlike traditional medical devices, many AI tools continue to learn and change after they’ve been approved, meaning their behavior can shift in unpredictable ways once they’re in use.

In the new paper, Celi and his colleagues argue that the FDA’s current system is not set up to keep tabs on these post-approval changes. Their analysis calls for stronger rules around transparency and bias, especially to protect vulnerable populations. If an algorithm is trained mostly on data from one group of people, it may make mistakes when used with others.

The authors recommend that developers be required to share information about how their AI models were trained and tested, and that the FDA involve patients and community advocates more directly in decision-making. They also suggest practical fixes, including creating public data repositories to track how AI performs in the real world, offering tax incentives for companies that follow ethical practices, and training medical students to critically evaluate AI tools.

“This work has the potential to drive real-world impact by prompting the FDA to rethink existing oversight mechanisms for AI-enabled medical technologies. We advocate for a patient-centered, risk-aware, and continuously adaptive regulatory approach—one that ensures AI remains an asset to clinical practice without compromising safety or exacerbating healthcare disparities,” the authors say.

#####

In your coverage, please use this URL to provide access to the freely available article in PLOS Digital Health: https://plos.io/3HgQkja

Citation: Abulibdeh R, Celi LA, Sejdić E (2025) The illusion of safety: A report to the FDA on AI healthcare product approvals. PLOS Digit Health 4(6): e0000866. https://doi.org/10.1371/journal.pdig.0000866

Author Countries: Canada, United States

Funding: The author(s) received no specific funding for this work.
