Abstract

In national hospital databases, certain prognostic factors cannot be taken into account. The main objective was to estimate the performance of two models based on two databases: the Epithor clinical database and the French hospital database. For each database, we randomly sampled a training dataset containing 70% of the data and a validation dataset containing the remaining 30%. Model performance was assessed with the Brier score, the area under the receiver operating characteristic curve (AUC), and model calibration. For Epithor and the hospital database, the training dataset included 10,516 patients (with 227 (2.16%) and 283 (2.7%) deaths, respectively) and the validation dataset included 4,507 patients (with 93 (2%) and 119 (2.64%) deaths, respectively). A total of 15 predictors were selected for the models (including FEV1, body mass index, ASA score, and TNM stage for Epithor). Brier score values were similar for the models of the two databases. On the validation data, the AUC was 0.73 [0.68-0.78] for Epithor and 0.80 [0.76-0.84] for the hospital database. The slope of the calibration plot was less than 1 for both databases. This work showed that the performance of a model developed from a national hospital database is nearly as good as that obtained with Epithor, but the hospital database lacks crucial clinical variables such as FEV1, ASA score, and TNM stage.
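The evaluation pipeline described in the abstract (a random 70/30 train/validation split, then Brier score, AUC, and calibration slope on the held-out data) can be sketched as follows. This is a minimal illustration on synthetic data with a roughly 2-3% event rate, not the authors' actual models or data; the predictors, coefficients, and logistic model are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic cohort: 5 hypothetical predictors, rare binary outcome (~2-3% deaths),
# mimicking the event rates reported in the abstract.
rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 5))
logit = -4.0 + X @ np.array([0.8, 0.5, 0.3, 0.0, 0.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# 70% training / 30% validation split, stratified to preserve the event rate.
X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
p = model.predict_proba(X_va)[:, 1]

# Brier score: mean squared difference between predicted risk and outcome.
brier = brier_score_loss(y_va, p)

# Discrimination: area under the ROC curve.
auc = roc_auc_score(y_va, p)

# Calibration slope: refit the outcome on the linear predictor (logit of the
# predicted risk); a slope < 1 indicates predictions that are too extreme,
# as reported for both databases in the abstract.
lp = np.log(p / (1.0 - p)).reshape(-1, 1)
slope = LogisticRegression(max_iter=1000).fit(lp, y_va).coef_[0, 0]

print(f"Brier={brier:.4f}  AUC={auc:.3f}  calibration slope={slope:.2f}")
```

The calibration-slope refit is one common way to quantify calibration; the paper may instead report a slope from a grouped calibration plot, which this sketch does not reproduce.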
