Abstract

Large observational databases are often subject to missing data. As such, methods for causal inference must simultaneously handle confounding and missingness; surprisingly little work has been done at this intersection. Motivated by this, we propose an efficient and robust estimator of the causal average treatment effect from cohort studies when confounders are missing at random. The approach is based on a novel factorization of the likelihood that, unlike alternative methods, facilitates flexible modelling of nuisance functions (e.g., with state-of-the-art machine learning methods) while maintaining nominal convergence rates of the final estimators. Simulated data, derived from an electronic health record-based study of the long-term effects of bariatric surgery on weight outcomes, verify the robustness properties of the proposed estimators in finite samples. Our approach may serve as a theoretical benchmark against which ad hoc methods may be assessed.
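To make the general idea concrete, the sketch below shows a standard cross-fitted doubly robust (AIPW) estimator of the average treatment effect with flexible machine-learning nuisance models. This is a generic baseline under fully observed confounders, not the paper's proposed estimator: the abstract's likelihood factorization for confounders missing at random is not reproduced here, and the function name, models, and fold structure are illustrative assumptions.

```python
# Generic illustration (assumed, not the paper's method): cross-fitted AIPW
# estimation of the ATE with fully observed confounders. The paper's estimator
# extends doubly robust ideas like this to confounders missing at random.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def aipw_ate(X, A, Y, n_splits=5, seed=0):
    """Cross-fitted AIPW estimate of E[Y(1) - Y(0)] and its standard error,
    given confounders X, binary treatment A, and outcome Y (no missing data)."""
    n = len(Y)
    psi = np.zeros(n)
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        # Fit nuisance models on the training fold; flexible ML learners are
        # fine here, which is the abstract's point about nuisance estimation.
        ps = RandomForestClassifier(random_state=seed).fit(X[train], A[train])
        mu1 = RandomForestRegressor(random_state=seed).fit(
            X[train][A[train] == 1], Y[train][A[train] == 1])
        mu0 = RandomForestRegressor(random_state=seed).fit(
            X[train][A[train] == 0], Y[train][A[train] == 0])

        # Evaluate nuisance predictions on the held-out fold.
        e = np.clip(ps.predict_proba(X[test])[:, 1], 0.01, 0.99)
        m1, m0 = mu1.predict(X[test]), mu0.predict(X[test])
        a, y = A[test], Y[test]

        # Doubly robust pseudo-outcome (efficient influence function plus ATE).
        psi[test] = m1 - m0 + a * (y - m1) / e - (1 - a) * (y - m0) / (1 - e)

    return psi.mean(), psi.std(ddof=1) / np.sqrt(n)
```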

