Abstract

The generalization performance of a risk prediction model can be evaluated by its calibration, which measures the agreement between predicted and observed outcomes on external validation data. Here, we propose methods for assessing the calibration of discrete time‐to‐event models in the presence of competing risks. Specifically, we consider the class of discrete subdistribution hazard models, which directly relate the cumulative incidence function of one event of interest to a set of covariates. We apply the methods to a prediction model for the development of nosocomial pneumonia. Simulation studies show that the methods are strong tools for calibration assessment even in scenarios with a high censoring rate and/or a large number of discrete time points.
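
For orientation, the quantities named above can be written out as follows; this is standard notation for discrete subdistribution hazard models rather than a quotation from the paper. The discrete subdistribution hazard of the event of interest is

λ1(t|x) = P(T = t, ε = 1 | T ≥ t or (T ≤ t − 1 and ε ≠ 1), x),

and the cumulative incidence function follows as

F1(t|x) = P(T ≤ t, ε = 1 | x) = 1 − ∏_{s=1,...,t} (1 − λ1(s|x)),

so a model for λ1(t|x) directly determines the predicted cumulative incidence of the event of interest.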

Highlights

  • Over the past decade, risk prediction models have become an indispensable tool for decision making in applied research

  • Based on the binary representation of the subdistribution hazard model in (5) and (7), we propose to adapt the recalibration framework by Cox (1958) as follows: assuming that calibration assessments are again based on a validation sample (Tm, ∆m, εm, xm), m = 1, ..., N, we propose to fit a logistic regression model for the logit of the subdistribution hazard, log(λ1(t|xm) / (1 − λ1(t|xm))); a rough illustrative sketch is given after the highlights

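To make the last highlight more concrete, below is a minimal sketch of such a Cox-type recalibration in Python. It is an illustration under simplifying assumptions, not the implementation used in the paper: the toy data and all variable names are made up, and random censoring is ignored, so that subjects with a competing event simply remain in the subdistribution risk set until the last time point.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy validation data (purely illustrative; names and sizes are assumptions):
#   time[m]            : discrete event/censoring time T_m in {1, ..., k}
#   status[m]          : 0 = censored, 1 = event of interest, 2 = competing event
#   pred_hazard[m, t-1]: predicted discrete subdistribution hazard lambda_1(t | x_m)
rng = np.random.default_rng(1)
n, k = 200, 5
time = rng.integers(1, k + 1, size=n)
status = rng.integers(0, 3, size=n)
pred_hazard = rng.uniform(0.05, 0.40, size=(n, k))

# Augmented person-period data set: one row per subject and time point in the
# subdistribution risk set. Simplification: random censoring is ignored, so
# subjects with a competing event stay in the risk set until time k, while
# subjects with the event of interest leave the risk set after the event.
rows = []
for m in range(n):
    last = k if status[m] == 2 else time[m]
    for t in range(1, last + 1):
        y = int(status[m] == 1 and t == time[m])
        rows.append((y, pred_hazard[m, t - 1]))
aug = pd.DataFrame(rows, columns=["y", "lam_hat"])

# Cox (1958)-type recalibration: regress the binary indicator on the logit of
# the predicted hazard; intercept ~ 0 and slope ~ 1 indicate good calibration.
aug["logit_lam_hat"] = np.log(aug["lam_hat"] / (1.0 - aug["lam_hat"]))
X = sm.add_constant(aug["logit_lam_hat"])
fit = sm.Logit(aug["y"], X).fit(disp=False)
print(fit.params)      # estimated intercept (const) and slope (logit_lam_hat)
print(fit.conf_int())  # rough check whether 0 and 1 are plausible values
```

In a real validation study the censoring distribution would have to be accounted for (e.g., via weighting), and the estimated intercept and slope would be compared with their ideal values of 0 and 1.
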
Introduction

Risk prediction models have become an indispensable tool for decision making in applied research. Popular examples include models for diagnosis and prognosis in the health sciences, where risk prediction is used, e.g., for screening and therapy decisions (Steyerberg, 2009; Moons et al, 2012b; Liu et al, 2014), and models for risk assessment in ecological research, which have become an established tool to quantify and forecast the ecological impact of technology and development (Gibbs, 2011). A key aspect in the development of risk prediction models is the validation of their generalization performance. This task, which is usually performed by applying a previously derived candidate prediction model to one or more sets of independent external validation data, has been subject to extensive methodological research (Moons et al, 2012a; Steyerberg and Vergouwe, 2014; Harrell, 2015; Steyerberg and Harrell, 2016; Alba et al, 2017). Alternative techniques that involve decision-analytic measures include, among others, net benefit analysis (Vickers et al, 2016), decision curve analysis (Vickers and Elkin, 2006), and relative utility curve analysis (Baker et al, 2009; Kerr and Janes, 2017).
