Abstract

This article explores the performance of regression discontinuity (RD) designs for measuring program impacts using a synthetic within-study comparison design. We generate synthetic RD data sets from the experimental data files of two recent evaluations of educational interventions, the Educational Technology Study and the Teach for America Study, and compare the RD impact estimates to the experimental estimates of the same intervention. The article examines the performance of the RD estimator when the design is well implemented and also examines the extent of bias introduced by manipulation of the assignment variable in an RD design. We simulate RD analysis files by selectively dropping observations from the original experimental data files and then compare impact estimates based on this RD design with those from the original experimental study. Finally, we simulate a situation in which some students manipulate the value of the assignment variable to receive treatment and compare RD estimates with and without manipulation. The RD and experimental estimators produce impact estimates that are not significantly different from one another and are of similar magnitude, on average. Manipulation of the assignment variable can substantially influence RD impact estimates, particularly if manipulation is related to the outcome and occurs close to the assignment variable's cutoff value.
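As a rough illustration of the procedure summarized above, the sketch below shows one way a synthetic RD sample might be carved out of an experimental data file and analyzed. It is not the authors' code: the column names (pretest, treated, outcome), the cutoff, and the bandwidth are placeholder assumptions, and the estimator is a generic local linear RD regression rather than the specific models used in the study.

```python
# Hypothetical sketch, not the study's actual code. Assumes an experimental
# data file with columns: pretest (assignment variable), treated (0/1),
# outcome. Cutoff and bandwidth values are illustrative placeholders.
import pandas as pd
import statsmodels.formula.api as smf

def experimental_estimate(df):
    """Experimental benchmark: treatment-control difference in mean outcomes."""
    return (df.loc[df.treated == 1, "outcome"].mean()
            - df.loc[df.treated == 0, "outcome"].mean())

def make_rd_sample(df, cutoff):
    """Mimic a sharp RD design by selectively dropping observations:
    keep treated units at or above the cutoff on the assignment variable
    and control units below it, discarding everyone else."""
    keep = ((df.treated == 1) & (df.pretest >= cutoff)) | \
           ((df.treated == 0) & (df.pretest < cutoff))
    return df.loc[keep].copy()

def rd_estimate(rd_df, cutoff, bandwidth):
    """Local linear RD estimate: fit separate slopes on each side of the
    cutoff within the bandwidth; the coefficient on `above` is the
    estimated impact at the cutoff."""
    sample = rd_df[(rd_df.pretest - cutoff).abs() <= bandwidth].copy()
    sample["xc"] = sample.pretest - cutoff
    sample["above"] = (sample.pretest >= cutoff).astype(int)
    fit = smf.ols("outcome ~ above + xc + above:xc", data=sample).fit()
    return fit.params["above"]

# Example use, assuming `data` holds the experimental analysis file:
# rd_df = make_rd_sample(data, cutoff=50.0)
# print(experimental_estimate(data))
# print(rd_estimate(rd_df, cutoff=50.0, bandwidth=10.0))
```

Under this construction, comparing rd_estimate with experimental_estimate on the same underlying data mirrors the within-study comparison described in the abstract; manipulation could be simulated by shifting the pretest values of some just-below-cutoff students above the cutoff before building the RD sample.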
