Abstract

The sharp regression discontinuity design (RDD) has three key weaknesses compared to the randomized clinical trial (RCT). It has lower statistical power, it is more dependent on statistical modeling assumptions, and its treatment effect estimates are limited to the narrow subpopulation of cases immediately around the cutoff, which is rarely of direct scientific or policy interest. This paper examines how adding an untreated comparison to the basic RDD structure can mitigate these three problems. In the example we present, pretest observations on the posttest outcome measure are used to form a comparison RDD function. To assess its performance as a supplement to the basic RDD, we designed a within-study comparison of causal estimates and their standard errors for (1) the basic posttest-only RDD, (2) a pretest-supplemented RDD, and (3) an RCT chosen to serve as the causal benchmark. The two RDD designs are constructed from the RCT, and all analyses are replicated with three different assignment cutoffs in three American states. The results show that adding the pretest makes functional form assumptions more transparent. It also produces causal estimates that are more precise than in the posttest-only RDD, though their standard errors remain larger than in the RCT. Neither RDD version shows much bias at the cutoff, and the pretest-supplemented RDD produces causal effects in the region beyond the cutoff that are very similar to the RCT estimates for that same region. Thus, the pretest-supplemented RDD improves on the standard RDD in multiple ways that bring causal estimates and their standard errors closer to those of an RCT, not just at the cutoff, but also away from it.
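
To make the design contrast concrete, the following is a minimal sketch in Python (not taken from the paper) of the two RDD estimators on simulated data: a posttest-only sharp RDD estimate of the discontinuity at the cutoff, and a pretest-supplemented estimate that differences the pretest comparison function out of the posttest outcome. The linear specification, the variable names, and the simulated data-generating process are illustrative assumptions only, not the authors' specification.

# Minimal sketch (illustrative, not the authors' code): posttest-only sharp RDD
# versus a pretest-supplemented RDD on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n, cutoff, effect = 2000, 0.0, 2.0

running = rng.uniform(-1, 1, n)                 # assignment (running) variable
treated = (running >= cutoff).astype(float)     # sharp RDD: treatment follows the cutoff rule
pre = 1.5 * running + rng.normal(0, 1, n)       # pretest outcome (measured before treatment)
post = 1.5 * running + effect * treated + rng.normal(0, 1, n)  # posttest outcome

def rdd_estimate(y, x, d):
    """OLS of y on an intercept, the centered running variable, and the treatment dummy;
    the coefficient on the dummy is the estimated discontinuity at the cutoff."""
    X = np.column_stack([np.ones_like(x), x - cutoff, d])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[2]

tau_post_only = rdd_estimate(post, running, treated)
# Pretest-supplemented version: subtract the pretest from the posttest so the
# comparison RDD function absorbs outcome variation shared across waves.
tau_pre_supplemented = rdd_estimate(post - pre, running, treated)

print(f"posttest-only RDD estimate:    {tau_post_only:.3f}")
print(f"pretest-supplemented estimate: {tau_pre_supplemented:.3f}")

Both estimators target the same discontinuity at the cutoff; the pretest-supplemented version simply uses the untreated pretest function as an additional comparison, which is the sense in which the paper reports tighter standard errors and more transparent functional form assumptions.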

