News: High use of unapproved ‘shadow AI’ among clinicians, survey finds

CDI Strategies - Volume 20, Issue 8

About 41% of respondents, including providers and administrators, reported they are aware of colleagues who use artificial intelligence (AI) tools their organization has not approved, according to a new survey conducted by Wolters Kluwer. Among the same group, 17% admitted they have used such tools themselves, a practice experts have termed “shadow AI,” HealthLeaders reported.

When asked why they use unauthorized AI, 45% of providers said the tools create faster workflows, 27% said the tools offer better functionality or that management has not yet offered them approved tools, and 26% said they are curious or want to experiment with the technology.

Of those who do not use unauthorized AI, 41% of administrators and 35% of providers said they’re very familiar with their organization’s AI policy and follow the rules closely. Both providers and administrators listed patient safety as the top AI risk, though 94% of administrators and 80% of providers either agree or strongly agree that AI will significantly improve healthcare in the next five years.

A white paper accompanying the survey recommends six steps to address the use of shadow AI:

  1. Develop clear policies on AI use
  2. Foster collaboration between policy decision-makers and users
  3. Identify purpose-built AI tools that support enterprise-wide security and goals
  4. Clearly communicate AI policies and provide training sessions
  5. Provide broader training on AI literacy
  6. Continue to monitor for uses of shadow AI and gather feedback

“Doctors and administrators are choosing AI tools for speed and workflow optimization, and when approved options aren’t available, they may be taking risks,” said Yaw Fellin, SVP and General Manager of Clinical Decision Support and Provider Solutions for Wolters Kluwer Health, which conducted the survey, in a press release. “Shadow AI isn’t just a technical issue; it’s a governance issue that may raise patient safety concerns. Leaders must act now to close the policy gap around AI use, develop clear compliance guidelines, and ensure that only validated, secure, enterprise-ready AI tools are used in clinical care.”

Editor’s note: This story was originally covered by HealthLeaders; the survey and accompanying white paper are available from Wolters Kluwer.
