Quality Improvement: Science and Action

Henry H. Ting, MD, MBA; Kaveh G. Shojania, MD; Victor M. Montori, MD, MSc; and Elizabeth H. Bradley, PhD

From the Knowledge and Encounter Research Unit (H.H.T., V.M.M.), Division of Cardiovascular Diseases (H.H.T.), and Division of Endocrinology (V.M.M.), Department of Medicine, Mayo Clinic College of Medicine, Mayo Clinic, Rochester, Minn; the Department of Medicine, Sunnybrook Health Sciences Centre and University of Toronto, Toronto, Ontario, Canada (K.G.S.); and the Robert Wood Johnson Clinical Scholars Program, Department of Internal Medicine, and Section of Health Policy and Administration, Department of Epidemiology and Public Health, Yale University School of Medicine, New Haven, Conn (E.H.B.).

Originally published 14 Apr 2009. Circulation. 2009;119:1962–1974. https://doi.org/10.1161/CIRCULATIONAHA.108.768895

Outcomes research examines the effects of healthcare interventions and policies on health outcomes for individual patients and populations in routine practice, as opposed to the idealized setting of clinical trials.
A national survey from 1998 to 2000 that evaluated the extent to which patients received established processes of care for 30 medical conditions illustrated the importance of outcomes research.1 Among adults living in 12 metropolitan areas in the United States, only half of patients received proven elements of preventive care, treatments for acute illness, and chronic disease management for which they were eligible. For cardiovascular conditions, the use of proven therapies varied widely: 68% of patients received recommended care for coronary artery disease, but only 25% received recommended care for atrial fibrillation.1

Despite these gaps between ideal and actual care, patient outcomes have improved in many fields. For instance, the age-adjusted mortality from cardiovascular disease in the United States fell by >40% from 1980 to 2000 as a result of improvements in risk factor modification and uptake of evidence-based treatments for coronary artery disease, myocardial infarction, and heart failure.2,3 Nevertheless, many Americans do not receive recommended care (either at all or in a timely fashion), whereas others receive too much care or the wrong care.4,5 In the field of cardiovascular diseases, substantial opportunities for improvement remain.

Outcomes research has generated a foundation of knowledge about what constitutes ideal care and what gaps exist between ideal and actual care, but we understand less about how to deliver this ideal care to every patient every day. The potential for basic science breakthroughs to reach and improve the health of individual patients and populations may be substantially delayed or may not be realized if science is not efficiently translated to action. Moreover, in many cases, increased delivery of established therapies would save more lives than the next innovation in therapy.6 Here, we review the underlying reasons for these gaps between ideal and actual care and potential strategies to address them.
The strategies we outline involve primarily activities by clinicians, researchers, managers, and other agents within the healthcare system, but we also highlight the importance of engaging patients as active participants in their own health care as a quality improvement strategy. This article focuses on the current scientific evidence and literature, which come mostly from academic centers and large institutions; less is known about successful strategies for quality improvement in small-scale practices and about the effect of contextual modifiers (such as practice setting) on quality improvement strategies.7,8

Gaps Between Ideal and Actual Care

Quality improvement research strives to bridge the gap between ideal and actual care.9 A Clinical Research Roundtable at the Institute of Medicine has defined T1 translational research as "the transfer of new understandings of disease mechanisms gained in the laboratory into the development of new methods for diagnosis, therapy, and prevention and their first testing in humans" (p 211)12 and T2 translational research as "the translation of results from clinical studies into everyday clinical practice and health decision making" (p 211).10–12 Westfall and colleagues11 have proposed an additional step, T3 translational research: "practice-based research to translate distilled knowledge from guidelines and systematic reviews to day-to-day clinical care" (p 211).12 The journey from science to action (ie, T2 and T3) can take decades. Accelerating these translational research steps requires interaction and collaboration among different skills and disciplines (Table 1). Furthermore, funding for T2 and T3 translational research by various agencies needs to be clarified and prioritized.12,13

Table 1. Disciplines Involved in T1, T2, and T3 Translational Research

T1 research (bench to humans): basic sciences; molecular biology; genetics; animal research; phase I and II clinical trials.
T2 research (humans to guidelines): phase 3 clinical trials; observational studies; evidence synthesis and guidelines; technology assessment; comparative effectiveness; policy and ethics.
T3 research (guidelines to patients): implementation and dissemination; system redesign; communication theory; clinical epidemiology; behavioral and management science; organizational development; patient encounter research.

The case of using β-blockers in patients after acute myocardial infarction demonstrates the delay between the availability of scientific evidence and widespread practice. The landmark Beta-Blocker Heart Attack Trial (BHAT), published in 1982, showed that the use of β-blockers in patients after acute myocardial infarction lowered mortality at 2 years' follow-up from 9.8% to 7.2%14; this finding was confirmed in other trials.15–18 However, not until 1996, more than a decade after the original publication of the scientific evidence, did the American Heart Association/American College of Cardiology guidelines recommend the routine use of β-blockers for all eligible patients after acute myocardial infarction.19 In 1997, the Joint Commission on Accreditation of Healthcare Organizations and the Centers for Medicare and Medicaid Services adopted the prescribing of β-blockers at discharge for patients with acute myocardial infarction as a hospital performance measure for quality of care. In an analysis of National Registry of Myocardial Infarction data from 1999, nearly 17 years after the publication of the BHAT randomized trial, Bradley and colleagues20 showed that overall only 60% of patients in the United States were prescribed a β-blocker at hospital discharge. Substantial variation was found between the lowest and highest quartiles of hospitals, with prescription rates of 42% and 78%, respectively.
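The BHAT result cited above can be restated in absolute terms with a few lines of arithmetic. This sketch simply recomputes the absolute risk reduction, relative risk reduction, and number needed to treat from the 9.8% and 7.2% two-year mortality figures quoted in the text; the derived quantities are our illustration, not endpoints reported by the trial.

```python
# Risk-reduction arithmetic from the BHAT 2-year mortality figures
# cited above (9.8% placebo vs 7.2% beta-blockade).
control_risk = 0.098
treated_risk = 0.072

arr = control_risk - treated_risk  # absolute risk reduction
rrr = arr / control_risk           # relative risk reduction
nnt = 1 / arr                      # number needed to treat

print(f"ARR = {arr * 100:.1f} percentage points")
print(f"RRR = {rrr:.1%}")
print(f"NNT = {nnt:.0f} patients treated for 2 years to prevent 1 death")
```

Treating roughly 38 patients for 2 years to prevent 1 death is a substantial effect at population scale, which is one way to appreciate the cost of the 17-year implementation lag described above.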
Finally, in 2007, ≈25 years after the BHAT randomized trial, the National Committee for Quality Assurance, which collects quality performance data from health maintenance organizations, found that >90% of patients were prescribed β-blockers at discharge.21 This prompted the National Committee for Quality Assurance to stop collecting and reporting this performance measure. The β-blocker journey took a quarter century from science to action, and it can be argued that the journey remains incomplete; Ho and colleagues22 have shown that adherence to β-blockers at 1 month after discharge was only 70%.

Factors Contributing to the Gaps and Strategies to Address Them

Applicability of Randomized Controlled Trials to Quality Improvement

Translating the findings of randomized controlled trials into routine clinical practice faces several challenges (Table 2). First, randomized controlled trials often focus narrowly on a simple intervention in highly selected patients to establish efficacy (ie, that the therapy works under ideal circumstances). Strict eligibility criteria can result in ≥90% of screened patients not being enrolled. For instance, the Clinical Outcomes Utilizing Revascularization and Aggressive Drug Evaluation (COURAGE) trial screened 35 539 patients and randomized 2287 patients (6.4%) to optimal medical therapy versus percutaneous coronary intervention plus optimal medical therapy.23 Thus, although COURAGE appears to have major implications for the management of stable coronary artery disease in routine practice, the high degree of patient selection in the trial renders efforts to implement its findings problematic (ie, across different patient subgroups that vary in their prognosis and responsiveness to treatment).

Table 2. Factors Contributing to the Gaps in Quality of Care and Strategies to Address Them

Factor: Limited applicability of randomized controlled trials
- Narrow focus on a simple intervention
- Strict eligibility criteria for enrollment
- Trials are typically conducted with resources (infrastructure, staff) that may not be present in community practices
Strategies:
- Promote practice-based research that would address the applicability of randomized controlled trials to patients and clinicians in usual practice
- Determine what strategies need to be implemented concomitantly with a specific intervention to achieve results in routine practice comparable to those in randomized controlled trials

Factor: Tension between action and evaluation
- Quality improvement is needed now and should not be delayed by evaluation and research
- Clinical research methods are not suitable to evaluate complex and rapidly changing quality improvement interventions
Strategies:
- Adopt a clinical research framework to evaluate distinct phases of quality improvement interventions
- Differentiate research techniques to apply in early phases (theory, modeling, and exploratory trials) to find candidate quality improvement interventions vs techniques to apply in later phases to evaluate the generalizability of such interventions for widespread dissemination

Factor: Lack of collaboration between academic medical center researchers and community clinicians
- Academic researchers have focused primarily on patient recruitment for clinical trials
- Diversity of the community clinicians' practice locations, sizes, and types of services
- Competition for community clinicians' time and resources, particularly under productivity models of reimbursement
Strategies:
- Develop practice-based research networks to link academic researchers with community clinicians
- Collaborate to identify the gaps between ideal and actual care, prioritize the needs of the community clinicians for strategies to close those gaps, and provide a laboratory for testing system improvements
- Design practice-based research that both is relevant and generates value to community clinicians and practices

Factor: Lack of expertise and experience to undertake quality improvement in health care
- Health care lags in translational efficiency and does not use general methodologies to improve systems compared with other industries
- Commonly used approaches in healthcare quality improvement, such as provider education, provider reminders, and audit and feedback, have generally shown small to modest effects on target quality problems
Strategies:
- Learn and apply approaches for quality improvement in health care that have been used successfully by other industries
- Design quality improvement projects with measurement, evaluation, and scholarship
- Choose specific quality improvement interventions on the basis of a clear understanding of the underlying causes of the targeted quality problem
- Pay attention to important mediating effects, including components of the intervention itself and the context in which the interventions are being delivered

Factor: Differences between physicians and managers
- Professional vs administrative theory
- Education, socialization, and goals
Strategies:
- Develop models for partnership and shared accountability between physicians and managers, with an overall goal of both effectiveness and efficiency of patient care
- Interdisciplinary education for physicians to learn management science and for managers to observe patient care encounters

Second, patients enrolled in clinical trials have greater access to initial and follow-up care than patients in usual clinical practice, including medications, tests, and monitoring; for instance, clinical trials often have the infrastructure and resources to provide medications and tests at nominal or no cost to the patient. Furthermore, the clinicians and healthcare organizations participating in clinical trials may represent superior performers in a specific discipline and differ from those in usual practice settings.
Although the findings may be quite informative for patients who received care at these clinical trial sites, there are uncertainties in applying the findings to other settings. These differences in access to care and clinician experience may contribute to disparate results when the same intervention is applied in different practice settings.

Even putting aside whether the sites and patients that participated in clinical trials represent the range of real-world settings, the same sites that evaluated the efficacy of a new intervention have not always been able to sustain or diffuse that intervention in routine practice. Majumdar and colleagues24 showed, for example, that sites that had taken part in the Survival and Ventricular Enlargement (SAVE) trial were no more likely to adopt widespread use of angiotensin-converting enzyme inhibitors for patients with acute myocardial infarction than were sites that had not taken part. This observation that passive dissemination does not happen even in the centers that participated in generating new knowledge about efficacy underscores the degree to which T2 and T3 translation requires active intellectual and capital investments. This need may have increased given the growing number of clinical trials that recruit patients from myriad sites, each of which enrolls only a handful of patients.

Strategies to Address the Gap

- Promote practice-based research that would address the applicability of randomized controlled trials to patients and clinicians in usual practice. This research can take the form of practical randomized trials25,26 or well-designed observational studies that control for the confounding factors that often affect patient selection and choice of treatments in routine practice.
- Determine what strategies need to be implemented concomitantly with a specific intervention (eg, additional infrastructure or support personnel) to achieve results in routine practice comparable to those in randomized controlled trials.

Tension Between Action and Evaluation in Quality Improvement

The Institute of Medicine reports on safety and quality4,5 achieved their intended aim of galvanizing all sectors of the healthcare system (providers, payors, policy makers, regulators, and the public) to engage in addressing widespread quality and safety problems. However, the sense of urgency has also created controversy about how best to implement candidate quality improvement interventions in a timely manner while evaluating the extent to which care in fact improved as a result of the intervention27–29 (Table 2). The debate has ranged from pragmatic concerns about evaluating complex interventions in real-world practice to more philosophical arguments, with some contending that quality improvement interventions are intrinsically too complex and change too rapidly to be studied with standard clinical research methods.28

Although some real differences undoubtedly exist between these points of view, much of this debate may reflect blurring of the stages of quality improvement research; the techniques used in early phases, in which candidate interventions are developed at a single center, differ from those used to evaluate the generalizability of such interventions for widespread dissemination (Figure).
The UK Medical Research Council has outlined a framework for describing the phases of research for complex interventions, including those related to the organization and delivery of care (eg, disease management clinics for congestive heart failure) and interventions directed at health professionals' behavior (eg, strategies for increasing uptake of guidelines).30 In this framework, the early phases of research focus on developing quality improvement interventions with efficacy in at least 1 setting, often using methodologies drawn from industrial quality improvement, the social sciences, cognitive psychology, human factors engineering, and organizational theory, among others. These fields and approaches represent the basic sciences of quality improvement, just as cellular biology and molecular biology represent the basic sciences for clinical research.

Figure. Continuum of increasing evidence for complex interventions. The figure depicts the progression of research on complex interventions (such as those in quality improvement) by analogy with the phases of clinical research, beginning with preclinical basic science and animal studies and proceeding through phase III clinical trials and phase IV surveillance studies. In the case of complex interventions, the preclinical phase includes development of a candidate intervention based on theoretical and empirical understanding of the target quality problem. Modeling studies (phase I) identify the key components of the intervention and the mechanisms by which they achieve their intended effects. Exploratory trials (phase II) characterize the version of the intervention that could be disseminated (including distinguishing constant and variable intervention elements) and demonstrate a feasible protocol for comparing the intervention with usual care or some alternative intervention. A definitive randomized controlled trial (phase III), often clustered and almost always multicentered, evaluates the effectiveness of the intervention, providing an estimate of the expected effect magnitude across a range of representative settings. Finally, analogous to postmarketing surveillance studies, phase IV studies examine the long-term consequences of the intervention, evaluating the sustainability of target effects and the emergence of unintended (adverse) effects. Adapted from the UK Medical Research Council Framework for Evaluating Complex Interventions.30

Once the early phases of research produce an intervention that has worked in 1 or a few settings, the next question is its generalizability. In some cases, interventions are so idiosyncratic to a specific institution that there is no candidate intervention to apply elsewhere (eg, as is often the case with parochial plan-do-study-act [PDSA] projects). In other cases, an intervention may not be intrinsically tied to the features of a single institution, and the question of its potential effects across a broad range of settings thus arises. Such interventions (eg, the chronic care model for disease management, medication reconciliation, and crew resource management for improving teamwork) may require modifications during implementation, but core strategies can be identified. Given the direct expenses and opportunity costs of implementing these complex interventions, evaluating the magnitude of effect across a spectrum of settings should precede widespread adoption. Here we enter the later phases of quality improvement research, and here randomized controlled trials offer the most rigorous evaluation of effectiveness. Controlled preimplementation and postimplementation studies and interrupted time series can provide reasonable compromises between the ideal of randomized controlled trials and the practical complexity of carrying out such trials.31,32 However, complexity by itself is not a compelling argument against evaluating quality improvement interventions; such complex interventions have in fact been studied with randomized controlled trial designs.33–35

Strategies to Address the Gap

- Differentiate research techniques to apply in early phases (theory, modeling, and exploratory trials) to find candidate quality improvement interventions versus techniques to apply in later phases to evaluate the generalizability of such interventions for widespread dissemination.
- Integrate qualitative methods into traditional quantitative studies of quality improvement to understand subtle variations in performance and the more complex aspects of organizational change and innovation adoption. An example of using qualitative methods is the Door-to-Balloon Quality Alliance, which identified key strategies used by the best-performing hospitals to improve reperfusion times at a national level for patients with ST-elevation myocardial infarction.36,37

Lack of Collaboration Between Academic Medical Centers and Community Clinicians

Researchers at academic medical centers have historically engaged community clinicians and practices with the primary intent of recruiting patients for clinical trials. Researchers and community clinicians, however, have not typically collaborated to identify the gaps between ideal and actual care, to prioritize the needs of the community clinicians for strategies to close those gaps, or to provide a clinical practice laboratory for testing system improvements (Table 2).
The diversity of community clinicians' practice locations, practice sizes, and types of services, as well as competition between community and academic practices for patients and personnel, presents potential barriers to collaboration. In addition, competition for community clinicians' time and resources, particularly under productivity models of reimbursement, limits their ability to engage in translational research.

Consider, for instance, that prehospital ECGs used to activate the catheterization laboratory while the patient is en route to the hospital have been shown to decrease door-to-balloon times in patients with ST-elevation myocardial infarction.38,39 Equipment to obtain prehospital ECGs is widely available,40 and paramedics can be trained to interpret or wirelessly transmit the data.41 The primary challenge is not simply obtaining these data but integrating the prehospital ECG with systems of care to improve processes and patient outcomes. This requires collaboration across historical silos, including academic researchers, community clinicians, and paramedics, who heretofore have not worked together to design patient-centered, seamless, and integrated systems of care.

Strategies to Address the Gap

- Develop practice-based research networks to link academic researchers with community clinicians. The Agency for Healthcare Research and Quality has been the leader in funding practice-based research networks in primary care and family medicine, but funding for this agency has been disproportionately meager compared with budgets for basic science and T1 translational research. In 2006, the National Institutes of Health funded 24 academic centers with Clinical and Translational Science Awards to promote translational and practice-based research.12
- Design practice-based research that both is relevant and generates value to community clinicians and practices, including, for example, the development of community-based personnel and systems that allow the collection and reporting of data for this research.
- Promote sharing of best practices, so that what works in 1 community practice reaches other practices within and across networks.

Lack of Expertise and Experience to Undertake Quality Improvement

Healthcare systems and other industries strive to deliver services or products that have value to their customers; value can be defined as incremental benefit (such as quality, safety, and service) divided by incremental cost. Andrew Grove, the former chairman of Intel, noted that both health care and the microchip industry have highly dedicated, well-trained people who provide a service or product based on a foundation of science.42 Beyond the obvious difference that one produces health care and the other produces microchips, another major difference lies in their respective capability and efficiency in translating science to action to deliver value to their customers. Industry has characterized this translational efficiency as knowledge turns: the cycle time required for an experiment to proceed from hypothesis to results, and from results to products brought to market, before a new cycle is started.
The knowledge turns in the microchip industry take only 1 to 2 years, as encapsulated in Moore's law, the 40-year-old prediction turned empiric observation that the number of transistors that can be practically included on a microchip (the basic determinant of computing speed) doubles every 1 to 2 years.43,44

Probably no field of health care has ever achieved knowledge turns such that some important outcome improves by a factor of 2 within such a short timeline, never mind sustaining such a pattern continuously, as has occurred in the microchip industry. In fact, the interaction between health care and the computing industry (eg, the implementation of electronic medical records and computerized order entry) represents a particularly striking example of translational inefficiency and the prolonged duration of knowledge turns in health care. Despite discussion in the medical literature of the promise of information technology dating back to the late 1960s and the widespread, ready availability of relevant technology for at least 15 to 20 years, <1 in 4 medical organizations used any form of electronic medical record in 2006.45,46 Far fewer had an integrated electronic medical record that combined outpatient, inpatient, laboratory, and imaging data in a single application, or an electronic medical record that was patient centered and portable across different healthcare organizations. This slow and incomplete penetration of computing technology into routine clinical practice contrasts sharply with the rapid and widespread dissemination of sophisticated information systems and e-commerce throughout myriad industries in the last 10 years.

Some may point out that the slow adoption of electronic medical records does not reflect irrational refusal by healthcare organizations to modernize current systems of care. Implementation of clinical information systems is a challenging task that requires a high level of integration, coordination, data interoperability and portability, investment of resources, and the methodologies used by industry to improve quality, safety, and reliability at a system level. But the complexity and demands of these tasks only underscore the need for more structured (and disciplined) approaches to managing change than have previously been used in health care.

Structured approaches to management, including standard tools and methods for quality improvement, have been adopted in other industries for decades. W. Edwards Deming introduced these methods to Japanese executives and engineers after World War II; they transformed automobile manufacturing, most famously in the Toyota Production System, yielding higher quality, faster production, and lower cost.47–49 Health care has failed to achieve comparable performance, partly because of the complex nature of human systems but also because quality improvement methodologies have not been routinely learned or widely used by healthcare professionals. Historically, healthcare organizations have focused on inspection and detection of defects, an approach that suffers from underreporting of both actual defects and near misses and does not lead to proactive redesign of systems of care.
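To make the evaluation designs discussed earlier more concrete, the following is a minimal sketch of an interrupted time series analysis by segmented regression, one of the pragmatic alternatives to randomized trials mentioned above. All data here are simulated and purely hypothetical: a monthly prescribing rate with a baseline trend and an abrupt 15-point jump when a quality improvement intervention begins.

```python
# Interrupted time series via segmented regression (simulated data).
# All numbers are hypothetical illustrations, not results from the article.
import numpy as np

rng = np.random.default_rng(0)

months = np.arange(24)                 # 12 months pre-, 12 months post-intervention
post = (months >= 12).astype(float)    # indicator: after the intervention starts
time_after = post * (months - 12)      # months elapsed since the intervention

# Simulated monthly prescribing rate (%): baseline level 60, upward trend,
# plus an abrupt 15-point level change at the intervention, plus noise.
true_level_change = 15.0
rate = 60 + 0.3 * months + true_level_change * post + rng.normal(0, 1.0, 24)

# Segmented regression:
#   rate ~ intercept + baseline trend + level change + post-intervention slope change
X = np.column_stack([np.ones(24), months, post, time_after])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
intercept, trend, level_change, slope_change = coef

print(f"estimated level change at the intervention: {level_change:.1f} points")
```

The level-change and slope-change coefficients are the quantities of interest in such an evaluation. A real analysis would use more pre- and post-intervention data points and account for autocorrelation and seasonality, which ordinary least squares ignores.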
