Abstract

Propensity score matching and weighting methods are often used in observational effectiveness studies to reduce imbalance between treated and untreated groups on a set of potential confounders. However, the prior methodological literature on matching and weighting has largely not examined performance in scenarios with a majority of treated units, as is often encountered with programs and interventions that have been widely disseminated or “scaled-up.” Using a series of Monte Carlo simulations, we compare the performance of k:1 matching with replacement and weighting methods with respect to covariate balance, bias, and mean squared error. Results indicate that the accuracy of all methods declined as treatment prevalence increased. While weighting produced the largest reduction in covariate imbalance, 1:1 matching with replacement provided the least biased treatment effect estimates. An applied example using empirical school-level data further illustrates the application and interpretation of these methods in a real-world scale-up effort. We conclude by considering the implications of propensity score methods for observational effectiveness studies, with a particular focus on educational research.
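The two estimators compared in the abstract can be sketched in a few lines. The following is an illustrative sketch, not the authors' simulation code: given already-estimated propensity scores, it performs 1:1 nearest-neighbor matching with replacement and computes standard ATT (average treatment effect on the treated) inverse-probability weights. All data values are hypothetical, chosen to mimic the majority-treated setting the paper studies.

```python
# Illustrative sketch (not the authors' code). Assumes propensity scores
# have already been estimated, e.g. by logistic regression of treatment
# on the confounders.

def match_with_replacement(treated_ps, control_ps):
    """1:1 nearest-neighbor matching with replacement: for each treated
    unit, return the index of the control with the closest propensity
    score. Controls may be matched to multiple treated units."""
    return [
        min(range(len(control_ps)), key=lambda j: abs(control_ps[j] - ps))
        for ps in treated_ps
    ]

def att_weights(ps_list, treated_flags):
    """ATT weighting: treated units get weight 1; each control gets
    ps / (1 - ps), up-weighting controls that resemble treated units."""
    return [1.0 if t else ps / (1.0 - ps)
            for ps, t in zip(ps_list, treated_flags)]

# Hypothetical majority-treated sample: four treated units, two controls.
treated_ps = [0.8, 0.7, 0.9, 0.6]
control_ps = [0.75, 0.5]
print(match_with_replacement(treated_ps, control_ps))  # → [0, 0, 0, 1]
```

Note how the first control is reused three times: with replacement, a scarce comparison pool can still supply close matches, which is one reason matching with replacement remains competitive when treatment prevalence is high.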
