Abstract

e13559 Background: Short-term cancer mortality prediction has many implications for care planning. An accurate prognosis allows healthcare providers to adjust care plans and take appropriate actions, such as initiating end-of-life conversations. Machine learning (ML) techniques have demonstrated promising capability to support clinical decision-making by providing reliable predictions for a variety of clinical outcomes, including cancer mortality. However, the evidence has not yet been systematically synthesized and evaluated. The objective of this review was to examine the performance and risk of bias of ML models trained to predict short-term (≤ 12 months) cancer mortality. Methods: We identified relevant literature from five electronic databases: Ovid MEDLINE, Ovid EMBASE, Scopus, Web of Science, and IEEE Xplore. We searched each database with predefined MeSH terms and keywords for oncology, machine learning, and mortality, combined with AND/OR operators. Inclusion criteria were: 1) developed/validated ML models for predicting oncology patient mortality within one year using electronic health record data; 2) reported model performance on a dataset that was not used to train the models; 3) original research; 4) peer-reviewed full paper in English; 5) published before 1/10/2020. We assessed risk of bias using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). Results: Ten articles were included in this review. Most studies focused on predicting 1-year mortality (n = 6) for multiple types of cancer (n = 5). Most studies (n = 7) used a single metric, the area under the receiver operating characteristic curve (AUROC), to evaluate their models. The AUROC ranged from 0.69 to 0.91, with a median of 0.85. Information on samples (n = 10), resampling methods (n = 6), model tuning approaches (n = 9), censoring (n = 10), and sample size determination (n = 10) was incomplete or absent.
Six studies had a high risk of bias in the PROBAST analysis domain. Conclusions: The performance of ML models for short-term cancer mortality prediction appears promising. However, most studies reported only a single performance metric, which obscures a model's true performance. This is especially problematic when predicting rare events such as short-term mortality. We found little to no information on a given model's ability to correctly identify patients at high risk of mortality. The incomplete reporting of model development poses challenges for risk of bias assessment and reduces confidence in the results. Our findings suggest that future studies should report comprehensive performance metrics following a standard reporting guideline, such as the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement, to ensure sufficient information for replication, justification, and adoption.
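The point that a single AUROC value can mask poor identification of high-risk patients when the event is rare can be illustrated numerically. The sketch below uses entirely synthetic scores (a hypothetical cohort with a 5% event rate, not data from any reviewed study) and pure Python: a model ranks patients well overall (AUROC ≈ 0.98), yet at a conventional 0.5 threshold its precision for flagging deaths is only 0.38.

```python
def auroc(labels, scores):
    """Rank-based AUROC: probability that a randomly chosen positive
    outranks a randomly chosen negative (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def precision_recall(labels, scores, threshold):
    """Precision and recall when flagging every patient with score >= threshold."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical cohort: 5 deaths (positives) among 100 patients (5% event rate).
labels = [1] * 5 + [0] * 95
# Hypothetical risk scores: deaths mostly ranked high, but a few
# survivors receive scores at the 0.5 decision threshold.
scores = [0.9, 0.8, 0.7, 0.4, 0.3] + [0.5] * 5 + [0.2] * 90

print(round(auroc(labels, scores), 3))                 # 0.979 — looks excellent
prec, rec = precision_recall(labels, scores, 0.5)
print(round(prec, 2), round(rec, 2))                   # 0.38 0.6 — far less reassuring
```

Reporting precision, recall, or calibration alongside AUROC would surface exactly this gap, which is why the review recommends comprehensive metrics for rare outcomes.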