Guest post: Why isn’t CMI a reliable metric?

CDI Blog - Volume 11, Issue 81


By Howard Rodenberg, MD, MPH, CCDS

Given that I have no math aptitude whatsoever, it’s still a mystery why I’m so fascinated by statistics. I’m serious about the no math thing; I’m not being self-deprecating. The last math class I took was trigonometry in high school, and that was essentially plotting connect-the-dot puzzles in class and getting credit for it. I even put off high-school physics until my last semester when I was already accepted to college and it wouldn’t show up on my grade point average if I failed. All that being said, I had a lot of fun running Hot Wheels cars down inclined planes into tin can lids swinging from strings, so the spring of 1980 wasn’t a total waste. (For the record, I put off physical education until that last year for the same reason.) Still, I find statistics intriguing, like a puzzle to figure out what they mean.

We use lots of metrics to describe the success or failure of a CDI program. We talk about numbers of charts reviewed, response rates, agreement rates, audit denials overturned, the financial impact of queries as measured by revenue changes and DRG shifts, and any number of other descriptive statistics. These are all solid and valid measures of the results of a CDI effort. But of all these measures, the case mix index (CMI) seems to matter most. It’s the single common currency of understanding between CDI, coding, and administration. Understanding it allows us to explain to administrators what’s really going on when the CMI doesn’t meet their expectations.

I started thinking more about this as I tried to reconcile the strong financial results of our query process with what I considered a minimal change in CMI. Our data clearly showed we were having a sizeable effect, so I wondered why the CMI wasn’t noticeably rising as well.

Experienced CDI professionals already know there are a multitude of reasons not to trust your CMI. CMI changes with variations in patient population, season, provider and service availability, and CDI staff productivity—just to name a few. (In fact, I recall a question on the CCDS prep test asking if your CMI could drop if all your orthopedic surgeons went to a meeting.) But what puzzled me was why a clear financial impact from the query process was not translating into a noticeable change in CMI.

The answer, it turns out, lies in the nature of statistics, or, more importantly, in what we choose to do with them in context. There’s a great little book from the 1950s by Darrell Huff called How to Lie With Statistics, and I encourage everyone to obtain a copy. In it, Huff explains how metrics and statistics can only be interpreted in context. For example, a statistic might say that 50% of boys prefer corn chips with their school lunch, but if your class of 100 has 98 girls and only two boys, that 50% represents a single student, and the preferences of 1% of the population are a poor basis for decisions.

Similarly, what happens with CMI is all about context. Let’s say that your CDI program issued 50 queries with financial impact in a given month, taking the CMI for that group of 50 records from 1.0 to 2.0. Let’s also assume that all your patients are covered by Medicare with a base rate of $6,000. The financial impact of your 50 queries is $300,000. Pretty nice chunk of change for a month’s work, right?

(Why am I using made-up whole numbers? To make this easy. See earlier paragraphs.)
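If it helps to see that arithmetic spelled out, here’s a minimal sketch in Python using the made-up numbers above. The variable names are mine, and it uses the simplified “relative weight times base rate” payment model from this example, not a full reimbursement calculation:

    # Each of the 50 queried charts gained 1.0 in relative weight
    # (the group's CMI went from 1.0 to 2.0), and under the simplified
    # model each point of relative weight pays the base rate.
    queried_charts = 50
    rw_gain_per_chart = 2.0 - 1.0  # CMI change for the queried group
    base_rate = 6000               # made-up Medicare base rate, in dollars

    impact = queried_charts * rw_gain_per_chart * base_rate
    print(f"${impact:,.0f}")       # prints $300,000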

Now let’s look at the CMI. The overall CMI is calculated and interpreted not in the context of the number of charts with queries of financial impact, but in the context of the entire patient population. Again, to make the math easy, let’s say the CMI of your entire population is 1.0 before the queries, and you’ve got 1,000 admissions reviewed each month. Taking into account these 50 charts, which are now at a CMI of 2.0, your overall CMI has increased to a mere 1.05.

How did we do that? It helps me to think of the concept of relative weights (RW) as “points.” If you have 100 charts with a CMI of 1.0, you have 100 RW “points” for that group of charts. So to start with, you’ve got 1,000 charts with a CMI of 1.0. This is 1,000 RW “points.”

Now we recalculate after the queries. Remember, we have 50 charts which now have a collective CMI of 2.0. Our total of RW “points” is now 1,050 (950 points from the 950 charts that remain at a CMI of 1.0 and 100 points from the 50 charts at 2.0). Dividing that number of “points” by the total number of records gives you a new CMI of 1.05. It looks unimpressive and is hard to tout as a stunning success to the bean-counters on high, but it is an accurate reflection of the effect of those queries on the entire population. And it’s a significant reason why it’s crucial to look for other, more focused measures of a CDI program’s effect.
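For anyone who wants to check the dilution effect themselves, here’s the same “points” arithmetic as a short Python sketch, again with my made-up numbers. The only assumption built in is the one from the example itself: CMI is the average relative weight across all charts.

    # CMI is the average relative weight across ALL charts, so the 50
    # improved charts are diluted by the 950 that didn't change.
    total_charts = 1000
    queried_charts = 50
    rw_queried = 2.0   # post-query CMI of the queried group
    rw_rest = 1.0      # CMI of the untouched charts

    rw_points = (total_charts - queried_charts) * rw_rest \
                + queried_charts * rw_queried   # 950 + 100 = 1,050
    new_cmi = rw_points / total_charts
    print(new_cmi)                              # prints 1.05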

It’s math simple enough that even I can get it. And that’s saying something.

Editor’s note: Rodenberg is the adult physician advisor for CDI at Baptist Health in Jacksonville, Florida. Contact him at howard.rodenberg@bmcjax.com. Advice given is general. Readers should consult professional counsel for specific legal, ethical, clinical, or coding questions. Opinions expressed are those of the author and do not represent HCPro or ACDIS.

 
