Abstract

This paper reviews the use of multiple trials, defined as multiple sites or multiple arms within a single evaluation as well as replications, in evaluating social programs. After defining key terms, the paper discusses the rationales for conducting multiple trials, which include increasing sample size to raise statistical power; identifying the most effective program design; increasing external validity; and learning how various factors affect program impact. The paper then examines why program design varies across sites, including adaptations to the local environment and to participant characteristics as well as lapses in fidelity to the program design, and considers when it is desirable to maintain consistency across sites. Distinctions are drawn between evaluations of pilots and demonstrations versus ongoing programs, and between programs where variation is permitted or encouraged versus those where a fixed design is desired. The paper includes illustrations drawn from evaluations of demonstrations and ongoing programs.
