Abstract

Background
Discriminative ability is an important aspect of prediction model performance, but challenging to assess in clustered (e.g., multicenter) data. Concordance (c)-indexes may be too extreme within small clusters. We aimed to define a new approach for the assessment of discriminative ability in clustered data.

Methods
We assessed the discriminative ability of a prediction model for the binary outcome of mortality after traumatic brain injury within centers of the CRASH trial. With multilevel logistic regression analysis, we estimated cluster-specific calibration slopes, which we used to obtain the recently proposed calibrated model-based concordance (c-mbc) within each cluster. We compared the c-mbc with the naïve c-index in centers of the CRASH trial and in simulations of clusters with varying calibration slopes.

Results
The c-mbc was less extreme in distribution than the c-index in 19 European centers (internal validation; n = 1716) and 36 non-European centers (external validation; n = 3135) of the CRASH trial. In simulations, the c-mbc was biased but less variable than the naïve c-index, resulting in lower root mean squared errors.

Conclusions
The c-mbc, based on multilevel regression analysis of the calibration slope, is an attractive alternative to the c-index as a measure of discriminative ability in multicenter studies with patient clusters of limited sample size.

Highlights

  • Discriminative ability is an important aspect of prediction model performance, but challenging to assess in clustered data

  • We compared these estimates with random-effects estimates of the calibration intercept and slope, and with the c-mbc (Eq 2), respectively

  • All analyses were done in R, with multilevel regression analysis performed using the lme4 package [20, 21]
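
The paper estimates cluster-specific calibration intercepts and slopes with multilevel (random-effects) logistic regression via lme4 in R. As a rough illustration only, the following Python sketch fits a calibration intercept and slope for a single cluster by plain (unpooled) maximum likelihood; the paper's multilevel approach instead shrinks cluster-specific estimates toward the overall mean via random effects, which this sketch does not do. Function and variable names are illustrative, not from the paper.

```python
import numpy as np

def calibration_fit(p, y, iters=50):
    """Fit the calibration model logit(P(y=1)) = a + b * logit(p)
    by Newton-Raphson for one cluster. b is the calibration slope,
    a the calibration intercept; p are the model's predicted risks."""
    p = np.asarray(p, float)
    y = np.asarray(y, float)
    x = np.log(p / (1 - p))                    # linear predictor logit(p)
    X = np.column_stack([np.ones_like(x), x])  # [intercept, slope] design
    beta = np.zeros(2)
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-X @ beta))       # fitted probabilities
        W = mu * (1 - mu)                      # IRLS weights
        grad = X.T @ (y - mu)                  # score
        H = X.T @ (X * W[:, None])             # observed information
        beta = beta + np.linalg.solve(H, grad)
    return beta                                # (a, b)
```

When the predicted risks are well calibrated for a cluster, the fitted slope b is close to 1 and the intercept a close to 0; slopes below 1 indicate predictions that are too extreme for that cluster, which is the situation the c-mbc is designed to handle.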


Introduction

Assessing the performance of a prediction model is of great practical importance [1, 2]. The c-index estimates the probability that, for two randomly chosen subjects with different outcomes, the model predicts a higher risk for the subject with the poorer outcome (concordance probability). The mbc at external validation is the closed-form variant of the previously proposed case-mix-corrected c-index [7]. The difference between the mbc at model development and the mbc at external validation indicates the change in discriminative ability attributable to the difference in case-mix heterogeneity between the development and validation data. The calibrated mbc (c-mbc) is based on predictions recalibrated to the external validation data.
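The concordance probability and the model-based concordance (mbc) described above can be made concrete with a small numerical sketch. This is an illustration under the usual pairwise definitions, not the paper's implementation: the mbc is computed here in a closed form over pairs, as the probability that a pair is concordant when outcomes are generated from the predicted risks themselves.

```python
import numpy as np

def c_index(p, y):
    """Naive c-index: among usable pairs (one event, one non-event),
    the fraction where the event subject got the higher predicted risk.
    Tied predictions count one half."""
    p, y = np.asarray(p, float), np.asarray(y, int)
    events, nonevents = p[y == 1], p[y == 0]
    diff = events[:, None] - nonevents[None, :]   # all event/non-event pairs
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size

def mbc(p):
    """Model-based concordance: the expected c-index if the predicted
    risks p were the true event probabilities. For a pair with risks
    (hi, lo), hi >= lo, the outcomes are discordant-and-concordantly-
    ranked with probability hi*(1-lo), discordantly ranked with lo*(1-hi)."""
    p = np.asarray(p, float)
    hi, lo = np.maximum(p[:, None], p[None, :]), np.minimum(p[:, None], p[None, :])
    conc = np.triu(hi * (1 - lo), k=1)   # each unordered pair once
    disc = np.triu(lo * (1 - hi), k=1)
    return conc.sum() / (conc.sum() + disc.sum())
```

For example, `mbc([0.1, 0.4, 0.8])` is about 0.91, while `mbc` of identical risks is exactly 0.5: more heterogeneous predicted risks (a wider case mix) yield a higher model-based concordance, which is why the mbc separates case-mix effects from miscalibration when comparing development and validation settings.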

