Abstract

BACKGROUND: With spinal surgery rates increasing in North America, models that can accurately predict which patients are at greater risk of developing complications are highly warranted. However, previously published studies that used large, multi-centre databases to develop prediction models have relied on the receiver operating characteristic curve and its associated area under the curve (AUC) to assess model performance. Recently, it has been found that a precision-recall curve with the associated F1-score may provide a more realistic assessment of these models.

PURPOSE: To develop a logistic regression (LR) model for the prediction of complications following posterior lumbar spine surgery and to assess whether the model's performance differs when evaluated with the AUC versus the F1-score.

STUDY DESIGN: Retrospective review of a prospective cohort.

PATIENT SAMPLE: The American College of Surgeons National Surgical Quality Improvement Program (NSQIP) registry was used. All patients who underwent posterior lumbar spine surgery between 2005 and 2016 with appropriate data were included.

OUTCOME MEASURES: Both the AUC and the F1-score were used to assess the prognostic performance of the prediction model.

METHODS: To develop the LR model used to predict a complication during or following spine surgery, 19 variables were selected from the NSQIP registry by three orthopedic spine surgeons. Two datasets were developed for this analysis: (1) an imbalanced dataset, taken directly from the NSQIP registry, and (2) a down-sampled dataset, balanced in order to evaluate whether balancing the data had an effect on model performance. The AUC and F1-score were applied to both datasets.

RESULTS: Within the NSQIP database, 52,787 spine surgery cases were identified, of which only 10% had complications during surgery. Applying the LR model showed a large difference between the AUC (0.69) and the F1-score (0.075) on the imbalanced dataset. However, no major difference existed between the AUC and the F1-score when the data were balanced and the LR model was reapplied (AUC 0.69 and F1-score 0.62, respectively).

CONCLUSIONS: The F1-score detected drastically lower performance for the prediction of complications when using the imbalanced data, but detected performance similar to the AUC when balancing techniques were applied to the dataset. This difference is due to a low precision score when many false-positive classifications are present, which the AUC does not identify. This lowers the utility of the AUC, as many of the datasets used in medicine are imbalanced. Therefore, we recommend using the F1-score on large, prospective databases when the data are imbalanced with a large number of true-negative classifications.
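The divergence between AUC and F1-score on imbalanced data can be reproduced outside the NSQIP setting. The sketch below is a hypothetical illustration only, not the authors' analysis: it uses synthetic data (roughly 10% positive class, 19 predictors, generated with scikit-learn's make_classification) and compares a logistic regression model's AUC and F1-score on the imbalanced data versus a down-sampled, balanced version. All names and parameters are assumptions chosen for illustration.

```python
# Hypothetical sketch (synthetic data, not NSQIP): how AUC and F1 can diverge
# for a logistic regression model on imbalanced vs. down-sampled data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split


def fit_and_score(X, y, label):
    """Split, fit LR, and report AUC (on probabilities) and F1 (on predicted labels)."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    f1 = f1_score(y_te, model.predict(X_te))
    print(f"{label}: AUC={auc:.2f}  F1={f1:.2f}")


# (1) Imbalanced dataset: ~10% positive ("complication") class, 19 predictors.
X, y = make_classification(n_samples=50_000, n_features=19, n_informative=5,
                           weights=[0.9, 0.1], class_sep=0.5, random_state=0)
fit_and_score(X, y, "Imbalanced  ")

# (2) Down-sampled dataset: keep all positives, sample an equal number of negatives.
rng = np.random.default_rng(0)
pos = np.flatnonzero(y == 1)
neg = rng.choice(np.flatnonzero(y == 0), size=len(pos), replace=False)
idx = np.concatenate([pos, neg])
fit_and_score(X[idx], y[idx], "Down-sampled")
```

On data like this, the AUC is similar in both settings while the F1-score is much lower on the imbalanced version, because the abundance of true negatives drives the default decision threshold toward the majority class and depresses precision and recall for the minority (complication) class.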

