News: 61% of hospitals using AI systems do not evaluate models for bias, report suggests

CDI Strategies - Volume 19, Issue 9

U.S. hospitals are increasingly integrating AI predictive models into healthcare processes to predict health trajectories or risks, monitor health, recommend treatments, simplify or automate billing procedures, and facilitate scheduling. However, according to a study published in a recent edition of Health Affairs, many hospitals are not evaluating these tools internally for accuracy or potential bias.

The study noted that hospitals reporting local evaluation of their predictive models and AI tended to be those that developed their tools in-house rather than using an algorithm provided through their electronic health record (EHR) vendor's platform. Hospitals relying on EHR developer-made models are less likely to evaluate them locally and could be more susceptible to using inaccurate or biased models that harm patients, the researchers wrote.

Among a sample of over 2,400 hospitals surveyed in 2023, 65% reported using AI or predictive models. Of those, 61% evaluated the models for accuracy using data from their own organization, while only 44% did the same for bias.

The report determined that hospitals using their own predictive models tended to be higher-margin hospitals and hospitals belonging to health systems.

“We found that critical access hospitals, other rural hospitals, and organizations serving high–Social Deprivation Index areas were less likely to use AI models,” the researchers wrote in the study. “This indicates that hospitals serving marginalized patient populations might not be able to access AI benefits at the same rate as hospitals serving more advantaged populations.”

Editor’s note: This article originally appeared in JustCoding, where the Health Affairs article is also linked.
