News: AI overuse undermines young doctors’ critical thinking, study finds
Generative artificial intelligence (AI) offers benefits in medicine but poses risks to medical trainees, including skill loss and the outsourcing of reasoning, which can undermine clinical competence and patient safety, according to a recent article published in BMJ Journal.
Research cited in the article shows that AI use is widespread among students, while institutional policies in most cases remain inadequate. The study authors wrote that an ideal policy should clearly define acceptable and unacceptable uses of AI for each task, including studying, academic assignments, and clinical documentation. It should also prohibit the entry of protected health information into commercial AI tools for data analysis.
In an interview with Medscape Medical News, Fares Alahdab, MD, an associate professor of biomedical informatics, biostatistics, epidemiology, and cardiology at the University of Missouri School of Medicine and one of the authors, said, “Most of the early literature and enthusiasm surrounding generative AI in medicine has emphasized its advantages, while drawbacks have largely been treated as a secondary issue or framed as a generic caution.”
The study examined the potential risks associated with the use of generative AI in medical education and identified six risk categories:
- Automation bias
- Outsourcing of reasoning
- Loss of skills
- Racial and demographic biases
- Hallucinations, defined as false information presented with confidence
- Data privacy and security concerns
The study authors state that of these risks, the loss of skills is the most concerning. Medical students have not yet built the mental models, pattern recognition, and reasoning habits that experienced physicians develop over years of practice, and reliance on AI can halt the process of building these competencies.
Outsourcing of reasoning is another risk they identified, as the fluent, polished responses produced by AI can lead users to “abandon independent information seeking, critical appraisal, and knowledge synthesis.” Over time, this results in the deterioration of skills that should be continuously reinforced.
“A red flag is when a student can no longer explain a concept, a differential diagnosis, or a treatment plan in their own words without first checking what the AI thinks,” Alahdab said. “Incorporating regular periods of study and self-assessment without AI is a simple way for students to monitor whether their own reasoning remains intact.”
The study authors recommend that students first attempt to complete tasks independently and then turn to AI to compare, analyze, and refine their work as needed. They also propose a paradigm shift in how students are assessed, asking them to demonstrate their reasoning processes rather than focusing solely on the final output. Under this approach, students would include a history of their interactions with AI, justifications for accepting or rejecting its suggestions, and the verification steps they took using primary sources.
