The Food and Drug Administration has no dedicated regulatory pathway for artificial intelligence-enabled medical devices, despite having cleared 691 such devices. A comprehensive study revealed significant gaps in safety reporting, alongside hundreds of adverse events, including one death.

Research published in JAMA Health Forum found that 95.5% of FDA-cleared AI and machine learning medical devices failed to report demographic characteristics of their testing populations, whilst only 1.6% provided data from randomised clinical trials.

The study analysed all AI-enabled medical devices cleared by the FDA from September 1995 to July 2023, finding that 96.7% were approved through the 510(k) pathway, which has fewer safety reporting and postmarket surveillance requirements than more rigorous approval processes.

Postmarket surveillance revealed 489 adverse events affecting 36 devices, including 458 malfunctions, 30 injuries and one death. Additionally, 40 devices were recalled 113 times, primarily due to software issues that could affect patient care.

Most device summaries failed to report crucial information: 46.7% provided no information on study design, 53.3% did not report training sample sizes, and 71.8% did not report safety assessments, meaning only 28.2% of devices documented premarket safety assessments.

The research highlighted concerning gaps in bias assessment, with 91.3% of devices failing to evaluate performance across different demographic groups. Among devices that did report demographic information, racial and ethnic minority groups were often underrepresented in testing populations.

Most AI medical devices fell under radiology (76.9%) and cardiovascular medicine (10.1%) specialities, with devices primarily relying on retrospective observational studies rather than prospective clinical evidence.

The study found that devices with adverse events were significantly more likely to be recalled than those without, raising questions about the adequacy of current postmarket surveillance systems for detecting AI-specific safety issues.

Researchers have noted that the performance of AI algorithms can drift over time, particularly for non-locked models that continue to learn from new patient data, complicating traditional safety monitoring approaches.

The findings suggest an urgent need for dedicated regulatory frameworks specifically designed for AI medical devices, including more rigorous standards for study design, transparency requirements, and enhanced postmarket surveillance systems.
