Abstract

Comparative interrupted time series (CITS) designs evaluate impact by modeling the relative deviation from pre-intervention trends between a treatment group and a comparison group after an intervention. The broad applicability of the design means it is widely used in education research. Like all non-experimental evaluation methods, however, the internal validity of a given CITS evaluation depends on assumptions that cannot be directly verified. We provide an empirical test of the internal validity of CITS by conducting four within-study comparisons of school-level interventions previously evaluated using randomized controlled trials. Our estimate of bias across these four studies is 0.03 school-level (or 0.01 pupil-level) standard deviations. The results suggest that well-conducted CITS evaluations of similar school-level education interventions are likely to display limited bias.
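For reference, a minimal sketch of one common linear CITS specification follows; the paper's exact model may differ. Here $Y_{it}$ is the outcome for school $i$ at time $t$, $T_i$ indicates treatment-group membership, and $P_t$ indicates the post-intervention period:

\[
Y_{it} = \beta_0 + \beta_1 T_i + \beta_2 t + \beta_3 (T_i \cdot t) + \beta_4 P_t + \beta_5 (T_i \cdot P_t) + \beta_6 (t \cdot P_t) + \beta_7 (T_i \cdot t \cdot P_t) + \varepsilon_{it}
\]

In this specification, $\beta_5$ and $\beta_7$ capture the treatment group's post-intervention deviations in level and slope, respectively, relative to the trend projected from the comparison group.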
