Healthcare and AI

A comprehensive summit report published in JAMA has detailed the wide-ranging effects of artificial intelligence across health care delivery, outlining fundamental challenges in evaluating, regulating and monitoring AI tools whilst proposing systemic solutions requiring collaboration across multiple stakeholders.

The JAMA Summit on AI brought together developers, health care systems, payers, regulators and patients to examine how AI should be developed, evaluated, regulated, disseminated and monitored. The resulting report, published in the peer-reviewed medical journal established in 1883 by the American Medical Association, categorises health care AI into four distinct types: clinical tools used by physicians, direct-to-consumer applications, business operations software, and hybrid systems serving multiple functions.

The authors found that many AI tools are already in widespread use, particularly in medical imaging, where 90 per cent of US health care systems have deployed some form of AI. However, the report documented substantial gaps in evaluation and oversight, noting that most tools enter practice with limited assessment of their health effects.

“AI will massively disrupt health and health care delivery in the coming years. Given the many long-standing problems in health care, this disruption represents an incredible opportunity. However, the odds that this disruption will improve health for all will depend heavily on creation of an ecosystem capable of rapid, efficient, robust, and generalizable knowledge about the consequences of these tools on health,” the authors concluded.

Derek C. Angus and colleagues examined why comprehensive evaluation remains challenging, describing how AI tool effectiveness depends heavily on human-computer interfaces, user training and deployment settings. A sepsis alert system’s performance, for instance, varies considerably between emergency departments and hospital wards, between community and teaching hospitals, and based on workflow integration.

The report detailed how current regulatory frameworks struggle with AI’s breadth and rapid evolution. The US Food and Drug Administration has cleared over 1,200 AI-enabled medical devices, predominantly in imaging, but many tools fall outside its jurisdiction. The 21st Century Cures Act exempts software providing administrative support, general wellness functions, certain clinical decision support, and various electronic health record functions from medical device classification.

Direct-to-consumer AI tools represent a market exceeding $70 billion annually, with more than 350,000 mobile health applications available. Three in 10 adults worldwide have used such applications, yet most avoid regulatory requirements by positioning themselves as low-risk general wellness products. The report noted that whilst some insurers encourage these tools through subscription coverage, most usage remains unreimbursed.

Business operations AI tools are rapidly being purchased by health care systems to improve efficiency and operating margins, including software for bed capacity optimisation, revenue cycle management, scheduling and supply chain management. The report highlighted that consequences for patients remain poorly understood, as these tools require no evaluation or regulatory review despite potentially large effects on health care access and quality.

The authors proposed four strategic priorities for responsible AI development and dissemination. First, multistakeholder engagement throughout the total product life cycle, moving beyond traditional sequential pathways where developers, regulators and health care systems operate separately. Second, development and implementation of proper measurement tools for evaluation and monitoring, including novel methods enabling health care systems to conduct rapid, efficient evaluations of effectiveness beyond current safety-focused approaches.

Third, creation of nationally representative data infrastructure and learning environments to support generalisable knowledge generation about AI tool effects across different settings. The report suggested this could function as a multicentre learning health system collaborative, requiring federal support comparable to the Health Information Technology for Economic and Clinical Health Act, which drove electronic health record adoption to over 97 per cent of health care systems within a decade through $35 billion in federal investment.

Fourth, establishment of appropriate incentive structures using market forces and policy levers to align stakeholder interests, as market forces alone may not guarantee optimal development and dissemination.

The report examined workforce implications extensively, noting that AI tools will change which health care professionals execute which tasks, require foundational AI understanding in training and continuing education, and necessitate ensuring equitable access whilst avoiding deployment of potentially harmful tools in settings poorly equipped to detect problems.

The authors addressed ethical and legal considerations including data rights, privacy and ownership questions that existing legislation does not fully resolve. The report also examined whether AI tool deployment constitutes quality improvement or human subjects research, affecting institutional review board oversight requirements.
