Abstract

Despite the importance of implementing research findings into nursing and health care practice to improve care delivery and outcomes, fewer than 50% of these findings ever make it into the practice environments for which they were intended.1 Many factors affect the uptake of evidence-based practices (EBPs) into routine clinical use, such as the complexity of the innovation, how it is implemented into practice, and the context/environment in which it is implemented. Identification and understanding of the local context are key to developing strategies for overcoming barriers that can impair the uptake, adaptation, and implementation of EBPs in a particular setting. The relevance and measurement of clinical context, and how context affects the adoption of new knowledge into practice, are among the most pressing research priorities in the 2026 plan from the National Institutes of Health2 and in those of the American Nurses Credentialing Center3 and the American Association of Critical-Care Nurses.4 The purpose of this column is to review key contextual factors that affect nurses' adoption of EBPs in clinical settings and instruments that are commonly used to measure context. We also discuss how to evaluate the role of contextual determinants when implementing EBPs, using a study by Reynolds et al5 as an example.

Despite its important impact on implementation outcomes, context is poorly understood and reported.6,7 Indeed, the word context is often used as a catchall term to describe the culture, environmental factors, and system characteristics that affect implementation and adoption of EBPs. When authors report context, they often present it as a list of factors instead of providing a meaningful definition.6 Context is a dynamic and multidimensional construct with dimensions that may exist at the local (ie, individual or unit), organizational, or external health system levels. Some dimensions, such as leadership, culture, financial resources, and resource availability, may be viewed as multilevel.6

Implementation science (IS) is the "scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice, and hence, to improve the quality and effectiveness of health services."8 Numerous IS determinant theories, models, and frameworks have been developed to help researchers understand the important contextual factors that influence implementation outcomes (or implementation success). Nilsen and Bernhardsson9 reviewed IS determinant theories, models, and frameworks and, despite considerable variation in terminology, identified the most widely addressed contextual determinants for implementing EBPs in health care settings: organizational support, financial resources, social relations and support, leadership, organizational culture and climate, and organizational readiness to change. Table 1 defines these context dimensions and gives examples of how they might affect implementation.

Assessment of context when implementing an EBP in a particular setting is key to developing strategies to overcome identified barriers that can hinder EBP adoption and implementation.
Several reliable and valid measures have been developed for this purpose, and a conceptual theory, model, or framework underpins some of them. Use of a theory, model, or framework to guide IS research is recommended to enhance the likelihood of successful adoption and implementation.9

The Promoting Action on Research Implementation in Health Services (PARiHS) framework is a widely used IS determinant framework.11 This conceptual framework postulates that 3 main components interact to influence successful implementation: (1) the strength of the evidence being implemented, (2) facilitation of the innovation, and (3) the context of the environment. Each of the 3 PARiHS components must be strong for EBP implementation to succeed. The context component has 3 elements: culture, leadership, and evaluation. Several instruments have been developed to complement the PARiHS framework and measure context; an overview of instruments that measure clinical context can be found in Table 2.

The most frequently reported of these instruments is the Alberta Context Tool (ACT),6 a measure of organizational context that is intended to capture clinicians' perceptions of a practice environment's readiness to adopt EBPs.12 The ACT has been used in various countries, populations, and settings. Its 8 domains are culture, leadership, evaluation, formal interactions, informal interactions, social capital, structural and electronic resources, and organizational slack (eg, staff, space, and time resources). Like most context measures, the ACT is administered at the outset of implementation. Higher scores indicate a stronger, more positive context for implementing EBP. The ACT has a strong foundation in theory, and its results can be analyzed at the individual or organizational level.

The PARiHS framework also underpins the Context Assessment Index (CAI),13 which seeks to identify, at the unit level, weak and strong context areas in 3 domains (culture, leadership, and evaluation) for the purpose of developing an action plan. The domain scores can be summed to show the extent to which the existing context enhances or hinders person-centered care, as well as the clinical area's receptiveness to change. Indicators of a unit with strong context are clearly defined boundaries, appropriate and transparent decision-making processes, clear understanding of power and authority, and receptiveness to change. The CAI has been used to identify strengths and weaknesses and to develop an improvement plan, but it has not been widely used for investigating associations between context, implementation strategies, and implementation outcomes.

The PARiHS framework is also the theoretical basis of the Organizational Readiness to Change Assessment,14 an instrument that measures organizational-level variables posited to influence implementation of clinical EBPs; the assessment focuses on the specific EBP being implemented. The instrument comprises 3 scales with subscales that measure the strength of the evidence, the context of the environment (or setting) in which the proposed change will take place, and the facilitation (or support) needed to help people change their attitudes, behaviors, skills, and ways of thinking. The Organizational Readiness to Change Assessment does not sum scores from the 3 scales into an overall score. It assesses variation by facility or organization to provide an overall indication of the likelihood of success at baseline. The instrument is sensitive to change and so can be readministered during an implementation initiative to evaluate contextual changes over time.

The organizational theory developed by Weiner15 underpins the Organizational Readiness to Implement Change questionnaire,16 which measures the shared belief among an organization's members of that organization's readiness for change. The measure's 2 domains are change commitment and change efficacy, which can be described as the organization's desire and capability to change, respectively. Domain scores can be summed to get a total score, and scores can be compared across sites. The Organizational Readiness to Implement Change questionnaire helps organizations document their members' readiness, develop individual- and context-specific interventions, and identify strategies and resources that are relevant to the contextual factors identified.

The Advancing Research and Clinical Practice Through Close Collaboration (ARCC) model is a structured conceptual framework to support practice change within an organization.17 The ARCC model posits that, through formal education and skills building, a cadre of EBP mentors can be formed to foster implementation and sustainability across a hospital system. Three measures have been developed to assess key concepts of the ARCC model.17 The EBP Beliefs Scale, the EBP Implementation Scale,18 and the Organizational Culture and Readiness Scale for Systemwide Integration of Evidence-Based Practice19 were recently shortened and are now unidimensional instruments. The short version of the EBP Beliefs Scale measures a clinician's beliefs about the value of EBP and their ability to implement it20; such beliefs have been shown to strongly influence the adoption of new EBPs.21,22 The short version of the EBP Implementation Scale measures a clinician's experience with actually implementing EBP, whereas the short version of the Organizational Culture and Readiness Scale for Systemwide Integration of Evidence-Based Practice measures perceived organizational culture and readiness for integration of EBP,20 which has been shown to influence clinicians' beliefs about EBP and the extent to which they implement it in practice.23–25 The 3 measures each have good construct validity and reliability in the nursing context; however, it is not yet known whether the shortened versions are predictive of EBP implementation and competency over time.20 The longer forms may still be desirable if organizations or units need targeted interventions or want to reassess context over time.

Other instruments that are not associated with a particular conceptual theory, model, or framework have also been developed to measure the context for evidence implementation. These include the Implementation Climate Scale (ICS) and the Implementation Leadership Scale. The ICS assesses individuals' perceptions of what is expected, rewarded, supported, and recognized regarding EBP implementation within a unit.26,27 To measure the extent to which an employee perceives their unit to prioritize and value EBP, the ICS uses 6 domains: the unit's focus on EBP, staff recognition for using EBP, educational support for EBP, rewarding staff for using EBP, selecting/hiring staff who value or use EBP, and selecting/hiring staff who are open to innovation.
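Across these instruments, scoring typically follows the same arithmetic: Likert-type item responses are grouped into domains, averaged (or summed) within each domain, and, for instruments that allow it, combined into a total score. The following minimal Python sketch illustrates that pattern; the item names, domain structure, and scale values are hypothetical and are not the published items of any instrument discussed above.

```python
# Illustrative sketch only: hypothetical items, domains, and scale values.
# Shows the common pattern of averaging Likert-type items into domain
# scores and (where an instrument permits) summing domains into a total.

from statistics import mean

# One respondent's answers on a hypothetical 1-5 agreement scale
responses = {
    "commitment_1": 4, "commitment_2": 5, "commitment_3": 4,
    "efficacy_1": 3, "efficacy_2": 4, "efficacy_3": 3,
}

# Hypothetical 2-domain structure (loosely modeled on change commitment
# and change efficacy)
domains = {
    "change_commitment": ["commitment_1", "commitment_2", "commitment_3"],
    "change_efficacy": ["efficacy_1", "efficacy_2", "efficacy_3"],
}

# Average the items within each domain, then sum domains into a total
domain_scores = {
    name: mean(responses[item] for item in items)
    for name, items in domains.items()
}
total_score = sum(domain_scores.values())

print(domain_scores)  # {'change_commitment': 4.33..., 'change_efficacy': 3.33...}
print(total_score)    # higher totals suggest stronger readiness to change
```

In practice, published scoring manuals additionally specify reverse-scored items, handling of missing responses, and whether scores are compared at the individual, unit, or organizational level.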
The ICS has been validated in the nursing context at the unit level.27

The Implementation Leadership Scale measures frontline nurses' perceptions of their nurse managers' implementation leadership in 4 domains: proactivity, knowledge, support, and perseverance.28 This instrument can help clinicians, researchers, and leaders in nursing contexts to assess frontline managers' leadership during implementation, to develop and deliver interventions that target areas needing improvement, and to improve implementation of EBP. The Implementation Leadership Scale is one of the most widely used instruments for measuring health professionals' leadership traits and behaviors,29 and it demonstrates good construct validity with registered nurses (RNs).

Finally, structural factors that characterize staff, a unit, or an organization can be collected as measures of context. Common structural factors that may offer insight into the context for implementing EBPs at the staff level are age, shift, highest clinical degree, years of experience in the profession, and years of experience on the unit. Factors at the unit level may include bed capacity, average daily census, average patient age, skill mix, nurse hours per patient day, and case mix index.30 At the organizational level, contextual structural factors often include number of beds, ownership type, location (urban, rural), and Magnet designation.9,31

To choose the appropriate measure(s) for assessing a particular EBP implementation construct (for example, organizational readiness), several things should be considered. First, using an IS determinant theory, model, or framework to guide a project may aid in identifying an appropriate measure(s). In addition, answering the following questions may be helpful: What contextual factors or barriers are anticipated, and will the measure assess them in sufficient detail? Will the EBP being implemented be affected by contextual factors at the individual, local/unit, or organizational level?30 Is the purpose of the assessment to measure the general capacity for EBP adoption and develop strategies to improve the culture for EBP implementation, or is it to identify and address barriers to a specific evidence-based intervention being implemented? Most context measures have relatively weak psychometric evidence, if any, for their sensitivity to change over time and their predictive validity. Thus, the purpose of the assessment, reported prior experience with the measure, and how the scores will be used and evaluated should all be considered when selecting a measure.

The evaluation of context in IS research can use quantitative, qualitative, or mixed-methods approaches. Context measures in IS studies have guided data collection, aided in descriptive analyses, and been used to investigate the association between context, particular implementation strategies, and implementation success.6 To exemplify how context might be evaluated, we consider a study by Reynolds et al.5 The main purpose of that stepped wedge cluster randomized IS trial was to improve documentation compliance for chlorhexidine gluconate bathing, a well-supported infection prevention EBP, and compliance with the appropriate chlorhexidine gluconate bathing process (per the Agency for Healthcare Research and Quality protocol) in 14 critical care inpatient units at 2 organizations.
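For readers unfamiliar with the design, the sketch below shows how a stepped wedge schedule might be constructed: every cluster begins in the control condition and crosses over to the intervention at a randomly assigned step until all clusters are exposed. The unit count matches the trial, but the number of steps, the crossover order, and the uniform rollout are illustrative assumptions, not the actual randomization from Reynolds et al.5

```python
# Illustrative sketch only: a generic stepped wedge schedule, not the
# actual randomization from the trial. Every cluster (unit) starts in
# the control condition and crosses over to the intervention at its
# assigned step; by the final period all clusters are exposed.

import random

units = [f"unit_{i}" for i in range(1, 15)]  # 14 units, as in the trial
n_steps = 7                                  # hypothetical step count

random.seed(42)        # reproducible illustration
random.shuffle(units)  # random crossover order

# Two units cross over at each step (14 units / 7 steps)
crossover_step = {unit: 1 + idx // 2 for idx, unit in enumerate(units)}

def condition(unit: str, period: int) -> str:
    """Return a unit's condition in a given period (periods start at 0)."""
    return "intervention" if period >= crossover_step[unit] else "control"

# Print the cluster-by-period design matrix (C = control, I = intervention)
for unit in sorted(crossover_step, key=crossover_step.get):
    row = " ".join(condition(unit, p)[0].upper() for p in range(n_steps + 1))
    print(f"{unit:>8}: {row}")
```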
The study used an IS process model (the implementation model developed by Grol et al10) to help guide the selection of implementation strategies, which included educational outreach visits and audit and feedback. Investigators administered the CAI to unit champions at study baseline to assess unit-level readiness to implement evidence. A total of 30 unit champions completed the CAI, and their responses assured investigators that the level of measurement was congruent with the planned level of analysis. Higher CAI scores indicate a stronger context.

Reynolds et al5 also assessed, at the outset of the trial, 12 structural characteristics associated with unit-level quality performance, including use of central catheters, RN hours per patient day, staff turnover, RN skill mix, length of stay, average RN age, number of beds, and RN full-time equivalents (see Table 3 for the full list). They then used generalized linear mixed modeling to evaluate the extent to which the CAI total score and the structural characteristics moderated the relationship between the implementation strategies and the outcomes.

Results for bathing process compliance showed a significant negative interaction between the intervention and the CAI score (b = −0.90, P < .001), indicating that for each 1-point increase in CAI score, the effect of the intervention (educational outreach visits and audit and feedback) on process compliance decreased by 0.90. Lower readiness to implement, fewer beds, shorter hospital length of stay, fewer RN full-time equivalents, and more RN hours per patient day were associated with an increase in the effect of the implementation strategies on the process compliance outcome (Table 3). For documentation compliance, higher readiness to implement, fewer unit admissions per month, and younger average age among nursing assistants were associated with an increase in the effect of the strategies on the documentation compliance outcome.

All but one of the contextual determinants that affected implementation success not only aided in the interpretation of the findings but also had the potential to add value in planning for dissemination of the findings to other settings and circumstances. One finding that does not make sense, however, is the moderating effect in which lower (CAI-determined) readiness was associated with greater implementation success for process compliance. Is it possible that nurses on units with lower readiness to implement an EBP for bathing benefited more from the implementation strategies than did nurses on units with higher readiness? This logic is inconsistent with the construct of readiness, or receptiveness to change. It is more likely that the CAI did not perform reliably in this instance, which probably reflects its stated purpose: to aid in developing action plans that address weaknesses at the unit level.
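To make the moderation analysis concrete, here is a minimal sketch of how such an intervention-by-context interaction might be specified; the variable names and data file are hypothetical, and a linear mixed model from the Python statsmodels library stands in for the generalized linear mixed model reported in the study.

```python
# Illustrative sketch only: hypothetical variable names and data file,
# with a linear mixed model standing in for the generalized linear
# mixed model reported in the study. The key element is the
# intervention-by-context interaction term: a negative coefficient on
# intervention:cai_score means the intervention effect shrinks as the
# baseline CAI (context) score rises.

import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per unit-period observation, with columns
#   compliance   - process compliance measure for that unit-period
#   intervention - 0/1 indicator of whether the unit has crossed over
#   cai_score    - the unit's baseline CAI total score
#   unit         - cluster identifier, used for the random intercept
df = pd.read_csv("bathing_compliance.csv")  # hypothetical file

model = smf.mixedlm(
    "compliance ~ intervention * cai_score",  # main effects + interaction
    data=df,
    groups=df["unit"],  # random intercept for each unit
)
result = model.fit()
print(result.summary())

# The moderated intervention effect at a given CAI score is
#   b_intervention + b_interaction * cai_score,
# so with b_interaction of about -0.90, each 1-point rise in CAI score
# lowers the estimated intervention effect on compliance by about 0.90.
```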
This study demonstrated that the CAI did not perform well as a quantitative measure for modeling the association between context and implementation success.5 Other weaknesses in measuring and evaluating context in the study by Reynolds et al5 included the failure to provide a meaningful definition of context, the failure to use an IS determinant model to guide context assessment, and the small sample of participants who completed the CAI (n = 30).

Evidence-based practice is a problem-solving approach that informs clinical decision-making by combining research evidence with a clinician's expertise and a patient's personal preferences and values.22 Failure to implement evidence from well-designed research studies hinders the use of EBPs that improve health care outcomes. If populations are to benefit from scientific discoveries that can improve outcomes, we must identify and understand the individual and organizational barriers to adoption, implementation, and sustainment of EBPs in clinical settings. When implementing evidence-based interventions, we should identify, measure, evaluate, and address relevant contextual determinants with strategies designed to minimize their impact. When selecting an EBP implementation context measure, understanding the measure's purpose and how it has been used previously to support and inform implementation will help ensure that the selected measure fits the intended purpose.
