Is there too much focus on measuring and reporting quality rather than on the conditions needed for improving it? The Centers for Medicare & Medicaid Services (CMS) and other organizations require physicians and hospitals to publicly report performance on quality measures, and the CMS and private payers are tying reimbursement partly to data from such measures in pay-for-performance programs. However, as the director of an intensive care unit performance improvement program, I know that it is difficult, and sometimes counterproductive, to try to improve a complex system simply by rewarding or penalizing the results.

Holding health care professionals and institutions accountable for quality metrics can backfire. For example, because reported quality measures are limited in number and reflect national rather than local priorities, they may divert attention from other, perhaps more important, problems in individual hospitals: a form of teaching to the test. Efforts to improve performance can also lead to gaming, through changes in documentation and coding, or even changes in clinical practice. As examples, health care professionals and institutions may avoid high-risk or nonadherent patients,1 base triage decisions on their effect on performance measures (such as choosing not to admit patients who are likely to be readmitted from the emergency department to reduce readmission rates), or omit screening that might identify conditions, such as hospital-acquired venous thromboembolism, that could reflect poorly on performance.2

What is less frequently discussed, but just as important, is that public reporting and pay-for-performance systems shift the focus of quality improvement to documentation. In so doing, these efforts take quality improvement out of the hands of clinicians and uncouple measurement from its clinical context. The hope is that the required measurements will jump-start a cycle of continuous quality improvement in which data are used to hone practice. However, there is no guarantee that data will be so used. Indeed, the tasks of measurement and reporting fully occupy many hospital quality improvement departments, leaving few resources for actually improving medical practice. To ensure standardization, each measure generally requires a hefty manual to specify methods and sometimes its own information technology and specialized staff. This bureaucratic work usually falls to nonclinical (or nonpracticing) staff, who may have little understanding of, or authority over, processes on the wards. In practice, such staff may deal almost entirely with improvements based on building documentation into the flow of work or modifying coding, creating the illusion of improved performance.

Hospital hallways are full of displays of charts showing progress on various quality measures; hospital leaders meet to discuss quality metrics; and administrators send newsletters that congratulate staff on accomplishing quality goals. However, in many hospitals, patient care is largely unaffected. Busy physicians and nurses rush by hallway displays and do not read newsletters that report quality metrics. When they do pay attention, they tend to regard the data with skepticism: after all, they do not perceive much change, save perhaps for some additional requirements for documentation. Few clinicians sit on quality committees, and still fewer have a role in the actual implementation of quality improvement projects.
The findings of a study3 presented in this issue of JAMA Internal Medicine reinforce concerns about the unintended consequences of public reporting and pay for performance and also suggest a gap between quality improvement activities and patient care. Lindenauer et al3 surveyed hospital leaders (chief executive officers and executives responsible for quality) about publicly reported quality measures required by the CMS. Although most respondents said that they used the measures extensively, more than half were concerned that the measures encouraged teaching to the test, and almost half reported trying to maximize performance primarily through changes in documentation and coding. Also important is that half or more believed that the CMS measures did not meaningfully distinguish among hospitals or accurately reflect quality of care, even for conditions specifically targeted by the measures. In short, the study findings suggest that many hospital leaders doubt the clinical relevance of these measures. This skepticism is consistent with national data: studies of public reporting and pay-for-performance programs in the United States have failed to demonstrate a clear connection to improved quality.4,5

How can these results be explained? The respondents may have understood that although publicly reported measures are highly influential, much of their effect does not reach the bedside. This may be clearest to those most closely involved in the mechanics of measurement and reporting. Executives specifically responsible for quality (eg, chief quality officers) were more than twice as likely as chief executive officers to believe that hospitals attempted to maximize performance on mortality and readmissions measures primarily by changing documentation and coding, and they were much less likely to believe that the measures were clinically meaningful for differentiating among hospitals. There was generally less skepticism about the clinical relevance of measures of process and patient experience, such as use of venous thromboembolism prophylaxis and patient satisfaction, than about outcome measures, such as mortality.