Guest Post: Finding a definition for failed CDI programs

CDI Blog - Volume 4, Issue 26

by Donald A. Butler

In March, I started a conversation on CDI Talk entitled “Failed Programs,” hoping at the time that there might be someone willing to divulge a first-hand account of how and why their program “failed” and perhaps how they were able to “save” or “reinvent” it. I was hoping to gather enough information to develop an article on the topic for the CDI Journal.

While the title of the discussion generated quite a bit of conversation (there were upwards of 36 responses at the time), no volunteers came forward. Unfortunately (or maybe fortunately), I don’t have any first-hand experience with a “failed” program, nor do I have any personal reflections to share from direct colleagues. Furthermore, the online discussion on CDI Talk helped me realize there is not a clear definition for what might be considered a “failed program” in the first place.

I understand this is a very sensitive subject. There might be real reluctance to participate in such a discussion depending on an individual’s experiences. Revealing serious struggles might risk the erroneous implication that the problems belong to a present program rather than some previous or anecdotal one. In my (humble) opinion, however, recognizing program problems can help us seize a genuine “opportunity for improvement.” (I’m not a fan of that phrase, by the way, thus the quotes. Am I the only one who dislikes it?)

But maybe even better than an individual program finding potential success amidst the rubble of seemingly insurmountable obstacles is the possibility that together we can all learn something from each other’s schools of hard knocks.

So, I request input (100% private and confidential) from anyone who might be willing to share their experiences of a CDI program that has either failed or come close. With some good input from our professional community, I believe there will be enough information to produce an article with some great insights into pitfalls and risks, strategies for success, and methods to rebuild.

For now, let’s focus this conversation on the variations of “failed” programs and think about potential underlying causes. Before we can consider failures, maybe we should outline what the industry has come to view as CDI program standards and basic functions. To help provide a framework for my reflections, please review these two quotes from the AHIMA Guidance for Clinical Documentation Improvement Programs (May 2010):

“The focus of most CDI programs is on improving the quality of clinical documentation regardless of its impact on revenue. Arguably, the most vital role of a CDI program is facilitating an accurate representation of healthcare services through complete and accurate reporting of diagnoses and procedures.”

And:

“A successful CDI program can have an impact on CMS quality measures, present-on-admission conditions, pay-for-performance, value-based purchasing, and other national reporting initiatives. The documentation in the medical record becomes data that is used for decision making in healthcare reform. Improving the accuracy of clinical documentation can reduce compliance risks, minimize a healthcare facility’s vulnerability during external audits, and provide insight into legal quality of care issues. In a successful program, the CDI professional works to facilitate the overall quality and completeness of clinical documentation to accurately represent the severity, acuity, and risk of mortality profile of the patient being treated.”

I also encourage review of the ACDIS White Paper “What Every CDI Program Needs to Succeed is Structure, Staff, Process,” by Lynne Spryszak, RN, CPC-A, CCDS, CDI education director for HCPro, Inc., in Danvers, MA.

So, without further ado, here are some thoughts I had on defining “failed” or “under-performing” CDI programs.

A “failed” program is one that:

  • Completely ceases to exist due to:
    • Elimination or cancellation by the organization, either as a cost-saving measure or due to the perceived or actual lack of performance of the program.
    • Staff departures, which prevent long-term viability/sustainability. This might reflect a program where success is based on individual performance rather than on the CDI process. Also, smaller programs are likely at higher risk where the loss of one or two team members can eliminate the program’s ‘institutional memory.’
    • Some fault or error surrounding initial implementation, program design, or inadequate support.
    • Lack of sufficient staffing where the devoted resources are inadequate or not supported.
  • Significantly misses performance targets (or metrics) where:
    • Targets are undefined and/or internal benchmarks are not established
    • Metrics are not rigorously reviewed for accuracy, shared, and/or applied to maintain focus and potential growth
    • Targets are unrealistic (may be either internally set or established by consultants)
    • Metrics are focused primarily on financial impact
    • Metrics are appropriate but goals and findings are not shared with CDI specialists
    • Benchmarks and reporting are efficient but findings are not used as tools for CDI staff or physician education and feedback
  • Eliminates or transfers staff due to a perceived lack of success or financial hardships
  • Lacks medical staff engagement, collaboration, and support due to:
    • Ineffective communication of the CDI program mission
    • Lack of administrative support that encourages medical staff partnership in the CDI program
    • An uncooperative medical staff/organizational relationship
  • Has been deemed to have failed by an external (consulting) group due to:
    • An analysis of performance, metrics, or focus (based on the consultant’s standards) in which the program simply doesn’t measure up
    • A change in consulting relationships from one firm to another, especially where there are differing philosophies and goals
    • A change in CDI program focus

Of course, there are those programs which have not “failed” but which we may consider less than successful. What are the indicators to watch for in those cases? In my opinion, a less-than-successful CDI program is one that:

  • Exhibits continuous staff turnover and/or is chronically understaffed due to:
    • Inappropriate training
    • Insufficient employee screening, interviewing, and assessment
    • Inadequate administrative and executive support
  • By design, focuses on only one area of CDI activity, which results in a lack of improvement in other (ignored) areas, such as:
    • Preventing or refuting RAC denials (as an effective multi-spectrum CDI program would)
    • Improving facility and physician profiling data and reports (such as risk of mortality, length of stay, PEPPER reports, core measures, etc.)
  • Draws dissatisfaction from executives and/or organizational leaders due to:
    • Lack of appropriate leadership education and effective reporting of program successes
    • Ineffective CDI leadership/management
    • Unsatisfactory attempts to “win over” facility leadership
  • Lacks involvement with other organizational projects and initiatives or preparatory efforts such as:
    • The development and revision of various clinical and medical record forms 
    • Development, implementation, and ongoing integration of electronic medical records
    • Review of records for quality concerns
    • Review of records for clinical best practices
    • Review and analysis of financial forecasts and CDI program impact (with particular focus on impacts of coding and documentation changes)
    • Physician education (including residents)
    • ICD-10 planning and preparation at the steering committee level (or planning for other major changes)
    • Value-based purchasing
  • Exhibits interdepartmental hostilities between CDI and HIM due to a lack of:
    • Clearly defined roles and responsibilities
    • Adequate management interaction
    • Appropriate leadership chain of command
    • Inclusion of various team members in CDI processes
    • Smooth, integrated process flow toward the common goal of an accurately coded, complete medical record
  • Inadequately prepares, pursues, applies, and completes performance improvement activities such as:
    • Benchmarking
    • Metrics
    • Analysis
    • Staff education
    • Focused educational projects (by service, physician group, clinical topic)

Looking back over all these items, achieving success seems like a daunting task. A program with enough resources to actually address all of these items (let alone do them well) may well be a leader in CDI program best practices. Maybe I am being a bit too demanding, but it seems to me that taking just one or two items off this list at a time could generate solid long-term improvements.

I am sure other CDI professionals have other thoughts about what makes a program fail, so send them in. If you think my thoughts here are off-base, or if you think there are obvious things I’ve missed, please let me know—I very much want feedback!

Editor's note: Butler entered the nursing profession in 1993 and served 11 years with the US Navy Nurse Corps in a wide variety of settings. Since his facility's CDI program implementation in 2006, he has served (at the time of this article's original release) as the Clinical Documentation Improvement Manager at Vidant Medical Center.
