Abstract

Background: Two integrated maternity care policies implemented in the Netherlands in recent years have been evaluated continuously as part of the implementation process, creating a learning evaluation.
Methods: In this presentation, we focus on the quasi-experimental methods we use to evaluate integrated maternity care policies. The evaluation targets the disciplines involved, e.g. care providers in the medical and social domains, policymakers and patients. The integrated maternity care organizations were involved in the design of the learning evaluation.
Difference-in-differences is a widely used quasi-experimental design, derived from econometrics and well suited to policy evaluations.[1] When the treatment and control groups are carefully modelled, the effects of integrated care policies can be evaluated with difference-in-differences based on observational data.[2]
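For illustration, the canonical two-group, two-period difference-in-differences specification can be written as follows; this is a generic textbook formulation, not the exact model used in our evaluations:

Y_{it} = \alpha + \beta \, Treated_i + \gamma \, Post_t + \delta \, (Treated_i \times Post_t) + \varepsilon_{it}

Here Y_{it} is the outcome for individual i in period t, and \delta is the difference-in-differences estimate of the policy effect. It has a causal interpretation only under the parallel-trends assumption, i.e. that treated and control groups would have followed the same outcome trend in the absence of the policy.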
In the Netherlands, we created an anonymized data-infrastructure focused on maternity care and early life. This data-infrastructure contains nationwide, routinely collected observational data from electronic health records, claims data and background characteristics, linked at the individual level. We use this data-infrastructure to evaluate integrated maternity care policies with difference-in-differences, which controls for unobserved confounding variables, e.g. the level of collaboration in maternity care regions.
We used this data-infrastructure to evaluate bundled payments in maternity care. This policy was implemented in 2017 on an experimental and voluntary basis to enhance integrated care between the different maternity care providers. Difference-in-differences analysis was used to evaluate the effects of bundled payments on child and maternal outcomes and on healthcare costs. Furthermore, we evaluated the Promising Start program, which aims to improve the first 1000 days of every child, in particular for vulnerable pregnancies, by increasing collaboration between the medical and social domains. A sketch of how such an analysis can be set up on linked, individual-level data is given below.
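The following minimal sketch, using Python and statsmodels, illustrates a difference-in-differences regression of this kind; the file name and the column names (outcome, treated, post, region) are hypothetical placeholders and do not correspond to our actual evaluation pipeline.

# Illustrative difference-in-differences estimation on linked observational data.
# Column names below are hypothetical, not the variables in the Dutch
# maternity care data-infrastructure.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("linked_maternity_records.csv")  # hypothetical extract

# 'treated' = 1 for regions with an integrated care contract / bundled payment,
# 'post'    = 1 for births after the 2017 policy introduction.
model = smf.ols("outcome ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["region"]}
)

# The coefficient on the interaction term is the difference-in-differences
# estimate of the policy effect, valid under the parallel-trends assumption.
print(model.params["treated:post"])
print(model.summary())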
Results: We will first describe the process of creating our national-level observational data-infrastructure and its contents. Second, we will indicate how similar processes could be initiated in other countries and in other areas of healthcare. We will then discuss in more detail how we evaluated the two integrated maternity care policy interventions, bundled payments and the Promising Start program, and present preliminary results of these evaluations. For example, in the evaluation of bundled payments, we found that integrated maternity care organizations (the intervention group) achieved cost savings compared with organizations without integrated care contracts and bundled payments (the control group). We will discuss our findings from these evaluations in more detail as an illustration for other integrated care policy evaluations using observational data.
Conclusion: Opportunities and limitations of a nationwide, observational data-infrastructure for evaluating integrated care policies via causal inference methods will be shared and discussed with the audience. In addition, we will discuss and summarize several lessons learned and challenges encountered during the evaluations of integrated care policies. We will also touch on the added value of, and possibilities for, integrating qualitative research into the interpretation of quantitative outcomes based on observational data.
[1] Hernán MA. Methods of Public Health Research - Strengthening Causal Inference from Observational Data. N Engl J Med. 2021;385(15):1345-8.
[2] Angrist JD, Pischke JS. Mostly Harmless Econometrics: An Empiricist's Companion. Princeton: Princeton University Press; 2009.
