Abstract

This article illustrates the value and use of multilevel models for examining site-to-site outcome differences in the analysis of data from multisite evaluations. It shows how a traditional analysis approach can overlook effective interventions and miss important links between program implementation and outcomes. Practical issues regarding statistical power and intervention leakage are also discussed. Using data from an evaluation of an alternative teacher certification program, the illustration identifies site-to-site outcome differences and shows how program developers can use these differences to understand and strengthen an intervention.
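To make the abstract's central idea concrete, the sketch below shows the kind of multilevel (mixed-effects) model it describes: outcomes regressed on a treatment indicator, with a random intercept and a random treatment slope for each site, so that site-to-site variation in the intervention's effect can be estimated rather than averaged away. This is a minimal illustration assuming simulated data; the data frame, column names, and parameter values are hypothetical and are not taken from the evaluation in the article.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_sites, n_per_site = 12, 40
site = np.repeat(np.arange(n_sites), n_per_site)
treatment = rng.integers(0, 2, size=site.size)

# Simulate site-varying intercepts and treatment effects
# (hypothetical values chosen only for illustration).
site_intercept = rng.normal(0.0, 0.5, n_sites)
site_effect = rng.normal(0.3, 0.4, n_sites)  # mean effect 0.3, varying by site
outcome = (site_intercept[site]
           + site_effect[site] * treatment
           + rng.normal(0.0, 1.0, site.size))
df = pd.DataFrame({"outcome": outcome, "treatment": treatment, "site": site})

# Multilevel model: random intercept and random treatment slope by site.
# The estimated variance of the treatment slope indicates how much the
# intervention's effect differs from site to site; a single-level model
# would report only the average effect.
model = smf.mixedlm("outcome ~ treatment", df, groups=df["site"],
                    re_formula="~treatment")
result = model.fit()
print(result.summary())
```

A traditional single-level regression on the same data would pool all sites and report one average treatment effect; the random-slope variance in the mixed model is what surfaces the site-to-site differences the article argues program developers can learn from.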
