Background
Although innovation can improve patient care, implementing new ideas is often challenging. Previous research found that professional attitudes, shaped in part by health policies and organisational cultures, contribute to differing perceptions of innovation ‘evidence’. However, we still know little about how evidence is empirically accessed and used by organisational decision-makers when innovations are introduced.

Aims and objectives
We aimed to investigate the use of different sources and types of evidence in innovation decisions to answer the following questions: how do managers make sense of evidence? What role does evidence play in management decision-making when adopting and implementing innovations in health care? How do wider contextual conditions and intraorganisational capacity influence research use and application by health-care managers?

Methods
Our research design comprised multiple case studies with mixed methods. We investigated technology adoption and implementation in nine acute-care organisations across England. We employed structured survey questionnaires, in-depth interviews and documentary analysis. The empirical setting was infection prevention and control. Phase 1 focused on the espoused use of evidence by 126 non-clinical and clinical hybrid managers. Phase 2 explored the use of evidence by managers in relation to specific technologies: (1) those considered for adoption; (2) those successfully adopted and implemented; and (3) those rejected or discontinued.

Findings
(1) Access to, and use of, evidence types and sources varied greatly by profession. Clinicians reported a strong preference for science-based, peer-reviewed, published evidence. All groups called upon experiential knowledge and expert opinion. Nurses overall drew upon a wider range of evidence sources and types. Non-clinical managers tended to sequentially prioritise evidence on cost from national-level sources, and local implementation trials.
(2) A sizeable proportion of professionals from all groups, including experienced staff, reported difficulty in making sense of evidence. Lack of awareness of the existing implementation literature, lack of knowledge on how to translate information into current practice, and lack of time and relevant skills were reported as key reasons for this. (3) Infection outbreaks, financial pressures, performance targets and trusted relationships with suppliers seemed to encourage a pragmatic and less rigorous approach to sourcing evidence. Trust infrastructure redevelopment projects, and a strong emphasis on patient safety and collaboration, appeared to widen the scope for evidence use. (4) Evidence was continuously interpreted and (re)constructed through professional identity, organisational role, team membership, audience and organisational goals. (5) Doctors and non-clinical managers sourced evidence that was plausible to themselves; nursing staff also sought acceptance of evidence from other groups. (6) We found diverse ‘evidence templates’ in use: ‘biomedical-scientific’, ‘practice-based’ and ‘rational-policy’. These represented shared cognitive models that defined what constituted acceptable and credible evidence in decisions. Nurses drew on all three templates to make sense of evidence and problems; non-clinical managers drew mainly on the practice-based and rational-policy templates; and doctors drew primarily on the biomedical-scientific template.

Conclusions
Our findings suggest that an evidence-based management approach that inflexibly applies the principles of evidence-based medicine neglects how evidence is actioned in practice and how codified research knowledge inter-relates with other ‘evidence’ also valued by decision-makers. Local processes and professional and microsystem considerations played a significant role in adoption and implementation.
This has substantial implications for the effectiveness of large-scale projects and system-wide policy.

Funding
The National Institute for Health Research Health Services and Delivery Research programme.