Abstract

This paper examines the synthetic control method in contrast to commonly used difference‐in‐differences (DiD) estimation, in the context of a re‐evaluation of a pay‐for‐performance (P4P) initiative, the Advancing Quality scheme. The synthetic control method aims to estimate treatment effects by constructing a weighted combination of control units that represents what the treated group would have experienced in the absence of the treatment. While DiD estimation assumes that the effects of unobserved confounders are constant over time, the synthetic control method allows these effects to change over time, by re‐weighting the control group so that it has similar pre‐intervention characteristics to the treated group. We extend the synthetic control approach to the evaluation of a health policy in a setting with multiple treated units. We re‐analyse a recent study evaluating the effects of a hospital P4P scheme on risk‐adjusted hospital mortality. In contrast to the original DiD analysis, the synthetic control method finds that, for the incentivised conditions, the P4P scheme did not significantly reduce mortality, and that there was a statistically significant increase in mortality for non‐incentivised conditions. This result was robust to alternative specifications of the synthetic control method. © 2015 The Authors. Health Economics published by John Wiley & Sons Ltd.
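As a rough illustration of the re‐weighting idea described above, the sketch below estimates synthetic control weights on simulated data by minimising the pre‐intervention gap between a treated unit and a weighted combination of control units, with weights constrained to be non‐negative and to sum to one. All data, variable names and the effect size are hypothetical and are not taken from the Advancing Quality evaluation.

```python
# Minimal sketch of the synthetic control weighting step on simulated data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_controls, n_pre, n_post = 10, 12, 8

# Simulated outcomes: rows = time periods, columns = control units.
controls_pre = rng.normal(1.0, 0.1, size=(n_pre, n_controls))
controls_post = rng.normal(1.0, 0.1, size=(n_post, n_controls))

# Hypothetical treated unit: tracks some controls pre-intervention,
# then experiences an (invented) post-intervention shift of -0.05.
treated_pre = controls_pre[:, :3].mean(axis=1) + rng.normal(0, 0.02, n_pre)
treated_post = controls_post[:, :3].mean(axis=1) - 0.05

def pre_fit_loss(w):
    # Squared pre-intervention gap between the treated unit and the
    # weighted combination of control units (the "synthetic" unit).
    return np.sum((treated_pre - controls_pre @ w) ** 2)

w0 = np.full(n_controls, 1.0 / n_controls)
res = minimize(
    pre_fit_loss,
    w0,
    bounds=[(0.0, 1.0)] * n_controls,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
)
weights = res.x

# Post-intervention gap between the treated unit and its synthetic control,
# interpreted as the estimated effect path.
effect_path = treated_post - controls_post @ weights
print("Estimated effect per post-period:", np.round(effect_path, 3))
```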

Highlights

  • In the absence of randomised controlled trials, evaluations of alternative health policies and public health interventions may use evidence from natural experiments (Craig et al., 2012; Jones and Rice, 2011)

  • While the ‘gap’ between the predicted outcomes of the synthetic and the real North West before the programme started indicates the quality of the synthetic control region, the gap after the programme start can be attributed to the effect of Advancing Quality (AQ) (Figure 2, right panel)

  • This paper examines the synthetic control method in the context of an evaluation of a high profile health policy change


Summary

Introduction

In the absence of randomised controlled trials, evaluations of alternative health policies and public health interventions may use evidence from natural experiments (Craig et al., 2012; Jones and Rice, 2011). Difference-in-differences (DiD) methods are often used to estimate treatment effects in these settings, by contrasting the change in outcomes pre- and post-intervention for the treatment and control groups. DiD assumes that the effects of unobserved differences between the treatment and control groups are constant over time, and that any time effects (e.g. macro shocks) are common to the groups under evaluation. The combination of these two assumptions is often referred to as the ‘parallel trends assumption’, which implies that without the intervention, outcomes for the treated and control groups would have followed parallel trajectories over time. The authors present simulation evidence that the choice of specification for the DiD estimation can have a major impact on the point estimates and estimated statistical significance of estimated policy effects, and suggest that alternatives to DiD warrant consideration.
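To make the parallel trends logic concrete, the following sketch runs a simple DiD regression on simulated panel data; the coefficient on the interaction of the treatment-group indicator with the post-intervention indicator is the DiD estimate. The data, variable names and effect size are invented for illustration and do not reflect the Advancing Quality data or the specification used in the original study.

```python
# Illustrative DiD regression on simulated panel data (not the paper's model).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_units, n_periods, post_start = 40, 10, 5

# Panel of units observed over several periods; half the units are treated.
df = pd.DataFrame(
    [(u, t) for u in range(n_units) for t in range(n_periods)],
    columns=["unit", "time"],
)
df["treated"] = (df["unit"] < n_units // 2).astype(int)
df["post"] = (df["time"] >= post_start).astype(int)

# Outcome with a common time trend, a constant group difference, a
# hypothetical treatment effect of -0.2, and idiosyncratic noise.
df["y"] = (
    0.1 * df["time"]
    + 0.5 * df["treated"]
    - 0.2 * df["treated"] * df["post"]
    + rng.normal(0, 0.1, len(df))
)

# The coefficient on treated:post is the DiD estimate; it recovers the true
# effect here only because the simulated time trend is common to both groups.
model = smf.ols("y ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["unit"]}
)
print(model.params["treated:post"])
```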
