Abstract

Over the past two decades, the program evaluation literature has made great advances on improving methodological approaches for establishing causal inference. The two most significant developments include establishing the primacy of design over statistical adjustment procedures for making causal inferences, and using potential outcomes to specify the exact causal estimands produced by the research designs. This chapter presents four research designs for assessing program effects: the randomized experiment, the regression-discontinuity, the interrupted time series, and the nonequivalent comparison group designs. For each design, we examine basic features of the approach, use potential outcomes to define causal estimands produced by the design, and highlight common issues to consider when using the design in the field. Whenever possible, we use examples to illustrate how these designs have been used to assess program effects. We conclude by suggesting broader issues in program evaluation that the next generation of evaluators should consider.

Keywords: program evaluation; experiments; regression-discontinuity; interrupted time series; propensity score matching
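The potential-outcomes idea the abstract invokes can be made concrete with a small simulation (an illustration only, not drawn from the chapter; all names and numbers here are hypothetical). Each unit carries two potential outcomes, Y(1) under the program and Y(0) without it; the average treatment effect (ATE) is the causal estimand, and random assignment lets a simple difference in observed group means estimate it:

```python
import random

random.seed(0)

# Hypothetical units: each has two potential outcomes,
# y0 (without the program) and y1 (with the program).
n = 10_000
units = []
for _ in range(n):
    y0 = random.gauss(50, 10)   # outcome absent the program
    y1 = y0 + 5                 # outcome under the program (true effect = 5)
    units.append((y0, y1))

# The causal estimand: average treatment effect over all units.
true_ate = sum(y1 - y0 for y0, y1 in units) / n

# Randomized experiment: a coin flip decides which potential
# outcome is observed for each unit, so the difference in
# group means is an unbiased estimate of the ATE.
treated, control = [], []
for y0, y1 in units:
    if random.random() < 0.5:
        treated.append(y1)
    else:
        control.append(y0)

estimate = sum(treated) / len(treated) - sum(control) / len(control)
```

With 10,000 units the estimate lands close to the true effect of 5; the point of the sketch is that only one potential outcome per unit is ever observed, and randomization is what licenses the comparison.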

