Abstract

Background

Several papers report deficiencies in the reporting of information about the implementation of interventions in clinical trials. Information about implementation is also required in systematic reviews of complex interventions to facilitate the translation and uptake of evidence on provider-based prevention and treatment programs. To capture whether and how implementation is assessed within systematic effectiveness reviews, we developed a checklist for implementation (Ch-IMP) and piloted it in a cohort of reviews of provider-based prevention and treatment interventions for children and young people. This paper reports on the checklist's inter-rater reliability, feasibility and reasons for discrepant ratings.

Methods

Checklist domains were informed by a framework for program theory; items within domains were generated from a literature review. The checklist was pilot-tested on a cohort of 27 effectiveness reviews targeting children and youth. Two raters independently extracted information on 47 items. Inter-rater reliability was evaluated using percentage agreement and unweighted kappa coefficients. Reasons for discrepant ratings were content-analysed.

Results

Kappa coefficients ranged from 0.37 to 1.00 and were not influenced by one-sided bias. Most kappa values were classified as excellent (n = 20) or good (n = 17), with a few items categorised as fair (n = 7) or poor (n = 1). Prevalence-adjusted kappa coefficients indicated good or excellent agreement for all but one item. Four areas contributed to scoring discrepancies: 1) clarity or sufficiency of the information provided in the review; 2) information missed in the review; 3) issues encountered with the tool; and 4) issues encountered at the review level. Use of the tool demands a time investment, and it requires adjustment to improve its feasibility for wider use.

Conclusions

The case of provider-based prevention and treatment interventions demonstrated the relevance of developing and piloting the Ch-IMP as a useful tool for assessing the extent to which systematic reviews appraise the quality of implementation. The checklist could be used by authors and editors to improve the quality of systematic reviews, and it shows promise as a pedagogical tool to facilitate the extraction and reporting of implementation characteristics.

Electronic supplementary material

The online version of this article (doi:10.1186/s12874-015-0037-7) contains supplementary material, which is available to authorized users.
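For readers unfamiliar with the reliability statistics named above, the sketch below shows how they are typically computed. It is a minimal illustration, not the paper's analysis: the ratings are hypothetical, and the prevalence-adjusted coefficient is assumed here to be PABAK (2·p_o − 1 for binary items), a common choice when kappa is distorted by skewed prevalence.

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Unweighted Cohen's kappa for two raters over the same items."""
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n           # observed agreement
    m1, m2 = Counter(r1), Counter(r2)
    cats = set(r1) | set(r2)
    p_e = sum((m1[c] / n) * (m2[c] / n) for c in cats)      # chance agreement from marginals
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

def pabak(r1, r2):
    """Prevalence-adjusted bias-adjusted kappa for binary ratings: 2*p_o - 1."""
    p_o = sum(a == b for a, b in zip(r1, r2)) / len(r1)
    return 2 * p_o - 1

# Hypothetical ratings of one checklist item across 27 reviews (1 = reported, 0 = not)
rater_a = [1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
rater_b = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1]

# For these ratings: kappa ≈ 0.82, PABAK ≈ 0.85 (25/27 items agreed)
print(f"kappa = {cohen_kappa(rater_a, rater_b):.2f}, "
      f"PABAK = {pabak(rater_a, rater_b):.2f}")
```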

Highlights

  • Several papers report deficiencies in the reporting of information about the implementation of interventions in clinical trials

  • The evaluation of complex interventions seeks to determine not only whether prevention and treatment interventions work, but also ‘what works in which circumstances and for whom?’ This phrase was originally coined by Pawson and Tilley [1] to reflect the logic of inquiry of the realist paradigm, which aims to unpack how interventions work to generate outcomes

  • This paper reports on the development of the checklist, its inter-rater reliability, reasons for discrepant ratings, and the feasibility of using the checklist to assess ‘implementation in context’ in systematic reviews focusing on the delivery of provider-based prevention and treatment programs targeting children and youth


Introduction

Several papers report deficiencies in the reporting of information about the implementation of interventions in clinical trials. Information about implementation is required in systematic reviews of complex interventions to facilitate the translation and uptake of evidence on provider-based prevention and treatment programs. Process evaluation is an important component of an overall evaluation because it can help explain negative, modest and positive intervention effects, provide insight into the causal mechanisms of change, including the conditions under which mediators are activated, and unpick those aspects of a multi-method/format (i.e., structured vs unstructured) intervention that contribute to hypothesised intermediate and longer-term outcomes [6,7,8,9,10,11,12]. It is recommended that process evaluations include information on reach, dose delivered, dose received, fidelity, recruitment and the contextual factors that influence implementation [12]. The inclusion of information on intermediate variables leading to hypothesised outcomes, formative/pre-testing procedures, and quality assurance measures is also recommended [12].
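To make these recommended components concrete, the sketch below defines a hypothetical extraction record covering them. This is an illustration of the kind of structured capture such recommendations imply, not the actual 47-item Ch-IMP instrument, whose items are not reproduced here.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImplementationRecord:
    """Hypothetical record for the process-evaluation components
    recommended in [12]; not the actual Ch-IMP checklist."""
    review_id: str
    reach: Optional[str] = None           # proportion/description of target group reached
    dose_delivered: Optional[str] = None  # amount of intervention provided
    dose_received: Optional[str] = None   # amount of intervention taken up
    fidelity: Optional[str] = None        # whether delivery was as intended
    recruitment: Optional[str] = None     # how participants were enrolled
    context: Optional[str] = None         # contextual factors influencing implementation

    def reported_components(self) -> list[str]:
        """Names of the components a review actually reported."""
        return [k for k, v in vars(self).items()
                if k != "review_id" and v is not None]

# Example: a review reporting only fidelity and context
rec = ImplementationRecord(
    review_id="review-07",
    fidelity="87% of sessions delivered as planned",
    context="delivered in rural school settings",
)
print(rec.reported_components())  # ['fidelity', 'context']
```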
