While artificial intelligence (AI) can aid physicians in diagnosing patients, it also has drawbacks: it can distract doctors, foster overconfidence in its output, or erode their confidence in their own judgment.
A research team has now provided a framework of five guiding questions to ensure AI is properly integrated to support patient care without undermining physician expertise. The framework was published in the Journal of the American Medical Informatics Association.
“This paper moves the discussion from how well the AI algorithm performs to how physicians actually interact with AI during diagnosis,” said senior author Dr. Joann G. Elmore, professor of medicine at the David Geffen School of Medicine at UCLA. “This paper provides a framework that pushes the field beyond ‘Can AI detect disease?’ to ‘How should AI support doctors without undermining their expertise?’ This reframing is an essential step toward safer and more effective adoption of AI in clinical practice.”
To understand why AI tools can fail to improve diagnostic decision-making, the researchers propose five questions to guide research and development. The questions ask:
- What type and format of information should AI present?
- Should that information appear immediately, only after the physician's initial review, or only when the physician toggles it on?
- How does the AI show how it arrives at its decisions?
- How does it affect bias and complacency?
- What are the risks of long-term reliance on it?
These questions are essential for several reasons:
- The type and format of information affect doctors’ attention and diagnostic accuracy and can introduce interpretive biases.
- Immediate information can lead to a biased interpretation, while delayed cues may help maintain diagnostic skills.
- Showing how the AI system arrives at a decision can highlight which features were ruled in or out, aligning the tool more closely with doctors’ clinical reasoning.
- When physicians lean too much on AI, they may rely less on their own critical thinking.
- Long-term reliance on AI may erode a doctor’s learned diagnostic abilities.
“AI has huge potential to improve diagnostic accuracy, efficiency, and patient safety, but poor integration could make healthcare worse instead of better,” Elmore said. “By highlighting the human factors like timing, trust, over-reliance, and skill erosion, our work emphasizes that AI must be designed to work with doctors, not replace them. This balance is crucial if we want AI to enhance care without introducing new risks.”