Abstract

Treatment effect estimates from a regression discontinuity design (RDD) have high internal validity. However, the arguments that support the design apply to a subpopulation that is narrower than, and usually different from, the population of substantive interest in evaluation research. This disconnect between the RDD population and the evaluation population of interest suggests that RDD evaluations lack external validity. New methodological research offers strategies for studying, and sometimes improving, external validity in RDDs. This article examines four techniques: comparative RDD, covariate-matching RDD, treatment effect derivatives, and statistical tests for local selection bias. The goal of the article is to help evaluators understand the logic, assumptions, data requirements, and reach of these new methods.
