Abstract

The limited extent to which research evidence is utilised in healthcare and other public services is widely acknowledged. The United Kingdom government has attempted to address this gap by funding nine Collaborations for Leadership in Applied Health Research and Care (CLAHRCs). CLAHRCs aim to carry out health research, implement research findings in local healthcare organisations and build capacity across organisations for generating and using evidence. This wide-ranging brief requires multifaceted approaches; assessing CLAHRCs’ success thus poses challenges for evaluation. This paper discusses these challenges in relation to seven CLAHRC evaluations, eliciting implications and suggestions for others evaluating similarly complex interventions with diverse objectives.

Highlights

  • A persistent feature of healthcare provision worldwide is the gap between evidence-based ‘best practice’ and what is delivered routinely by health practitioners

  • The Cooksey review made a number of recommendations about how to close the second translation gap, including new funding initiatives and an expansion of the National Health Service (NHS)'s Health Technology Assessment (HTA) programme to facilitate the provision of a high-quality and accessible evidence base for NHS decision makers (Cooksey 2006)

  • We describe the challenges we have faced and some of the potential solutions we are starting to develop, which may be of use to others seeking to evaluate similar ventures in a way that is methodologically defensible, practically useful and pragmatically achievable


Summary

Background

A persistent feature of healthcare provision worldwide is the gap between evidence-based ‘best practice’ and what is delivered routinely by health practitioners. Use of qualitative methods and shifts in mode of reasoning away from statistical-probabilistic approaches may be increasingly accepted in the academic literature on evaluating complex entities such as CLAHRCs (Grol & Grimshaw 1999; Graham et al 2006; Kontos & Poland 2009), but for those used to traditional biomedical models of evaluation, they remain contentious (Wood et al 1998). This poses challenges in terms of what the outputs of our evaluations should look like, and what they should seek to provide to the CLAHRCs. Evaluation approaches that incorporate action research and models of social learning (Kolb 1984; Eden & Huxham 1996; Lave & Wenger 1991; Raelin 2009) are prominent in our work, with a view to ensuring outputs that are useful to practitioners and embedded into real-world practice improvements. Difficulty in achieving this raises the question of what should take priority: internal evaluations, sensitive to the particularities and needs of CLAHRCs, or external evaluations whose priority is generalisable theoretical knowledge?

