Abstract

Regression discontinuity (RD) designs are increasingly common in political science. They have many advantages, including a known and observable treatment assignment mechanism. The literature has emphasized the need for “falsification tests” and ways to assess the validity of the design. When implementing RD designs, researchers typically rely on two falsification tests, based on empirically testable implications of the identifying assumptions, to argue the design is credible. These tests, one for continuity in the regression function for a pretreatment covariate and one for continuity in the density of the forcing variable, use a null hypothesis of no difference in the parameter of interest at the discontinuity. Common practice can therefore incorrectly conflate a failure to find evidence of a flawed design with evidence that the design is credible. The well-known equivalence testing approach addresses these problems, but how to implement equivalence tests in the RD framework is not straightforward. This paper develops two equivalence tests tailored for RD designs that allow researchers to provide statistical evidence that the design is credible. Simulation studies show the superior performance of equivalence-based tests over tests-of-difference, as used in current practice. The tests are applied to the close elections RD data presented in Eggers et al. (2015b).

Highlights

  • The regression discontinuity (RD) design is an observational causal identification strategy used to study the impact of a deterministic treatment assignment mechanism, such as the incumbency effect for a party that wins a close election

  • While the literature lays out compelling structural theories for why sorting is unlikely in electoral settings, statistical evidence from the equivalence tests, in which we aim to reject a null hypothesis that the data are inconsistent with a valid RD design, provides only mixed support for the design across geographies

  • When a researcher cannot control the assignment mechanism in her study, causal identification will always rely on a set of causal identification assumptions that cannot be tested directly with observed data


Introduction

The regression discontinuity (RD) design is an observational causal identification strategy used to study the impact of a deterministic treatment assignment mechanism, such as the incumbency effect for a party that wins a close election. RD designs are typically thought to require relatively weak assumptions compared to other common analysis techniques for observational studies, such as regression or instrumental variables. While the assumptions may be weaker than those of some observational methods, RD designs still rely on strong causal identification assumptions. The necessary assumptions cannot be directly empirically tested, so the literature suggests that researchers should (1) consider theoretical mechanisms under which RD designs could be invalidated and (2) use falsification tests, that is, statistical hypothesis tests of observable implications of the necessary assumptions, to bolster their claims that the RD design is credible. As Eggers et al. (2015b) state, “the burden of proof is on the researcher to justify her assumptions and subject them to rigorous testing.”
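The paper's RD-tailored equivalence tests are more involved than a standard difference test, but the underlying logic can be sketched with a generic two one-sided tests (TOST) procedure applied to a pretreatment covariate in narrow windows on either side of the cutoff. This is a minimal illustrative sketch, not the authors' implementation: the function name, the equivalence margin, and the simulated covariate values are all assumptions introduced here for exposition.

```python
import numpy as np
from scipy import stats

def tost_equivalence(x_below, x_above, margin, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of means.

    Unlike a test-of-difference, the null here is that the two groups
    DIFFER by at least `margin`; rejecting both one-sided nulls at level
    `alpha` is affirmative evidence the mean difference lies within
    (-margin, +margin), i.e., evidence consistent with a valid design.
    """
    diff = np.mean(x_above) - np.mean(x_below)
    se = np.sqrt(np.var(x_above, ddof=1) / len(x_above)
                 + np.var(x_below, ddof=1) / len(x_below))
    df = len(x_above) + len(x_below) - 2
    # One-sided test of H0: diff <= -margin against H1: diff > -margin.
    p_lower = 1 - stats.t.cdf((diff + margin) / se, df)
    # One-sided test of H0: diff >= +margin against H1: diff < +margin.
    p_upper = stats.t.cdf((diff - margin) / se, df)
    p_tost = max(p_lower, p_upper)       # TOST p-value is the larger of the two
    return diff, p_tost, p_tost < alpha

# Illustrative use: a pretreatment covariate measured just below and just
# above the cutoff (simulated here; margin of 0.3 is an assumed choice).
rng = np.random.default_rng(0)
below = rng.normal(0.0, 1.0, size=400)
above = rng.normal(0.05, 1.0, size=400)
diff, p, equivalent = tost_equivalence(below, above, margin=0.3)
```

Note the reversal of the burden of proof relative to current practice: a small TOST p-value is evidence *for* similarity within the margin, whereas a large p-value from a test-of-difference is merely a failure to detect imbalance.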
