Abstract

Large-scale, randomised social experiments remain rare in Britain despite random assignment being widely regarded as the gold-standard evaluative methodology. Random assignment involves randomly allocating potential programme recipients either to groups that receive a service or to groups that do not. One perceived impediment to randomised social experiments is the practical difficulty of implementing them in the field. This article reports on research on the implementation of the largest randomised social policy experiment yet undertaken in Britain – the Employment Retention and Advancement (ERA) evaluation. Such ‘evaluations of evaluations’ have rarely been conducted within randomised experiments. The article highlights some of the tensions between operational realities and research ambitions in such experiments and suggests ways in which researchers can attempt to resolve these tensions in the context of real-world programmes and institutions.
