Abstract

Difference-in-differences (DID) estimators are a valuable method for identifying causal effects in the public health researcher's toolkit. A growing methods literature points out potential problems with DID estimators when treatment adoption is staggered and treatment effects vary over time. Despite this, no practical guide exists for addressing these new critiques in public health research. We illustrate these new DID concepts with step-by-step examples, code, and a checklist. We draw insights by comparing the simple 2 × 2 DID design (single treatment group, single control group, two time periods) with more complex cases: additional treated groups, additional time periods of treatment, and treatment effects possibly varying over time. We outline newly uncovered threats to causal interpretation of DID estimates and the solutions the literature has proposed, relying on a decomposition that shows how the more complex DID designs are an average of simpler 2 × 2 DID subexperiments. Expected final online publication date for the Annual Review of Public Health, Volume 45 is April 2024. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
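The simple 2 × 2 DID estimator described above can be sketched in a few lines. This is a minimal illustration with made-up group means (all values are assumptions for demonstration, not from the article): the estimate is the pre-to-post change in the treated group minus the pre-to-post change in the control group.

```python
# Hypothetical outcome means for the 2x2 DID design:
# one treated group, one control group, observed pre- and post-treatment.
# All numbers below are illustrative assumptions.
y_treated_pre, y_treated_post = 10.0, 15.0
y_control_pre, y_control_post = 9.0, 11.0

# DID estimate: (change in treated) minus (change in control).
# The control group's change proxies for the counterfactual trend
# the treated group would have followed absent treatment
# (the parallel-trends assumption).
did = (y_treated_post - y_treated_pre) - (y_control_post - y_control_pre)
print(did)  # 3.0
```

With staggered adoption, decompositions in this literature express the overall estimate as a weighted average of many such 2 × 2 comparisons, some of which use already-treated groups as controls.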
