In this issue of the Journal of Graduate Medical Education, an article entitled "Successful resident engagement in quality improvement: the Detroit Medical Center story" by Hussain et al presents an interesting foray into the world of pay-for-performance. The authors focused on a resident-driven pay-for-performance initiative that targeted venous thromboembolism (VTE) prevention and stroke care in a large urban academic medical center. In addition to being predominantly resident-driven, the intervention is unique in that it successfully obtained full institutional support for an educational endeavor. It thus serves as an excellent example of what can be achieved when educational and institutional goals are aligned.

The article also highlights the importance of adopting adjunctive software that augments established electronic health record systems to improve the quality of care delivered to a target population. In the pilot described by the authors, a decision support tool tailored evidence-based recommendations for VTE prevention and stroke care to the patients being evaluated.

The study offers a first look into whether resident behavior can be modified by the use of financial incentives. We know from earlier research that physicians' performance can be changed, for better or for worse, by financial incentives. The most illustrative example of financially motivated behavioral change comes from a 2004 United Kingdom (UK) experiment, in which family medicine physicians were incentivized to adhere to 136 clinically based core measures, known collectively as the Quality and Outcomes Framework. The results were astounding: payouts reached 83.4% of available incentive payments within the first year of the program and rose to 97.8% by 2007.

Hussain et al have shown that resident behavior can similarly be altered by financial motivation. As educators, it is encouraging to know that an educational program can achieve results of this magnitude across multiple disciplines while being completely self-policed and self-maintained. Not only does this exemplify the importance of institutional buy-in, but it also demonstrates the success that can be achieved through engagement of front-line staff rather than through blanket edicts. To date, most residency programs have predominantly paid lip service to the quality movement, asking trainees to understand the relevance of quality metrics and pay-for-performance through hypothetical scenarios rather than real-life situations.

While this program demonstrated resounding success in achieving the designated performance measures, a number of concerns come to light regarding the integrity of the pay-for-performance concept. The performance metrics used in the authors' study were process measures, chosen for their ease of measurement and the ease with which they define success. The reader is left with the age-old question of whether achieving these measures had an impact on clinically relevant outcomes, and if so, whether those outcomes justified the financial investment required to achieve them. We know that implementing such a broad and far-reaching resident-run program, with its additional electronic health record decision support, required a $250,000 startup cost. It would be interesting to know whether the hospital realized equivalent cost savings as a result of this initiative.
Similar to questions raised by the UK experiment, the intervention described by Hussain et al presents issues to be considered at the local level. The achievement gap discussed in this article was extremely narrow: compliance rates for VTE performance measures increased from 88.5% at baseline to 94.2% at 6 months and an astounding 100% at 12 months. Similarly, performance measures for stroke care improved from a baseline of 88% to 96.6% at 6 months and 100% compliance at 12 months. While impressive, one could argue that with the preimplementation baseline being so high, such improvements have little practical meaning in terms of both cost savings and patient outcome benefits. Readers should also be wary any time a performance indicator achieves and maintains 100% compliance over the long term. While we revel in these outcomes, "to err is human," even with decision support programs. The 100% success rates compel us to ask: Were there patients who were inappropriately excluded from the denominator?