Abstract

As preventive initiatives gather momentum in the UK, the need for rigorous evaluation of effective practice becomes more urgent. However, researchers, practitioners and policy makers may have different priorities when it comes to implementing evaluations in community settings. This article considers the competing demands that may be placed on evaluators in relation to three dimensions: the service (characteristics of the intervention itself); the sample (people who are participating); and the methodology or research design. It explores compromises that may be required between scientific ideals and real‐world limitations, and assesses the implications for obtaining meaningful results in evaluation research. Copyright © 2001 John Wiley & Sons, Ltd.
