Abstract
Background
Evidence-based approaches are requisite in evaluating public health programmes. Nowhere are they more necessary than in physical activity interventions, where evidence of effectiveness is often poor, especially within hard-to-reach groups. Our study reports on the quality of the evaluation of a government-funded walking programme in five ‘Walking Cities’ in England. Cities were required to undertake a simple but robust evaluation using the Standard Evaluation Framework (SEF) for physical activity interventions to enable high-quality, consistent evaluation. Our aim was not to evaluate the outcomes of this programme but to assess whether the evaluation process had been effective in generating new and reliable evidence on intervention design and on what had worked in ‘real world’ circumstances.
Methods
Funding applications and final reports produced by the funder and the five walking cities were obtained. These totalled 16 documents, which were systematically analysed against the 52 criteria in the SEF. Data were cross-checked between the documents at the bid and reporting stages with reference to the SEF guidance notes.
Results
Generally, the SEF reporting requirements were not followed well. The rationale for the interventions was poorly described, the target population was not precisely specified, and neither was the method of recruitment. Demographics of individual participants, including socio-economic status, were reported poorly, despite being a key criterion for funding.
Conclusions
Our study of the evaluations demonstrated a missed opportunity to establish confidently what worked and what did not work in walking programmes with particular populations. This limited the potential for evidence synthesis and for highlighting innovative practice warranting further investigation. Our findings suggest a mandate for evaluability assessment. Used at the planning stage, this may have ensured the development of realistic objectives and, crucially, may have identified innovative practice to implement and evaluate. Logic models may also have helped in the development of the intervention and of its means of capturing evidence prior to implementation. Research-practice partnerships between universities and practitioners could enhance this process. A lack of conceptual clarity means that replication and scaling-up of effective interventions are difficult and the opportunity to learn from failure is lost.
Highlights
Evidence-based approaches are requisite in evaluating public health programmes
The funded cities were expected to use the Standard Evaluation Framework (SEF) tool for systematic and detailed description of the intervention to enable replication, evidence synthesis and wider implementation [3], yet this requirement was not well followed
Decisions on implementing programmes can only be based on the best evidence available at the time, and it is essential that evaluators generate the best, and most robust evidence possible to help build the scientific case on what works and what does not in practice
Summary
Evidence-based approaches are requisite in evaluating public health programmes. Nowhere are they more necessary than in physical activity interventions, where evidence of effectiveness is often poor, especially within hard-to-reach groups. The adoption of an evidence-based approach in the design, delivery and evaluation of public health programmes and interventions is an increasing requirement of programme funding [1]. Evidence-based medicine is often built on the ‘gold standard’ of design validity in randomised controlled trials. This is rarely an appropriate methodology for multifaceted, complex public health interventions, where there are likely to be a number of interacting components and where contextual relevance is crucial [3, 4].