Abstract

In radiation oncology, stratifying patients by predicted risk allows tailoring of therapy intensification as well as the choice between systemic and regional treatments, all of which helps to improve patient outcomes and quality of life. Deep learning offers an advantage over traditional radiomics for medical image processing by learning salient features from training data originating from multiple datasets. However, while the large capacity of deep models allows them to combine high-level medical imaging data for outcome prediction, they often fail to generalize across institutions. In this work, a pseudo-volumetric convolutional neural network with a deep preprocessor module and self-attention (PreSANet) is proposed for predicting the occurrence probabilities of distant metastasis (DM), locoregional recurrence (LR), and overall survival (OS) within a 10-year follow-up time frame for head and neck cancer patients with squamous cell carcinoma. The model can process multi-modal inputs of variable scan length and integrate structured patient data into the prediction. These proposed architectural features and additional modalities all serve to extract more information from the available data when access to additional samples is limited. The model was trained on the public Cancer Imaging Archive (TCIA) Head–Neck-PET–CT dataset, consisting of 298 patients undergoing curative radiotherapy or chemoradiotherapy acquired from four different institutions, and was further validated on an internal retrospective dataset of 371 patients acquired from one of the institutions in the training dataset. An extensive set of ablation experiments was performed to test the utility of the proposed model characteristics, achieving AUROCs of 80%, 80% and 82% for DM, LR and OS, respectively, on the public TCIA Head–Neck-PET–CT dataset. External validation on the 371-patient retrospective dataset achieved an AUROC of 69% for all outcomes. To test model generalization across sites, a validation scheme consisting of single-site holdout and cross-validation combining both datasets was used, yielding mean accuracies across the four institutions of 72%, 70% and 71% for DM, LR and OS, respectively. The proposed model demonstrates an effective method for tumor outcome prediction in a multi-site, multi-modal setting, combining volumetric imaging data with structured patient clinical data.
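
As a rough illustration of the kind of architecture described above, the following sketch (assuming PyTorch; it is not the authors' implementation, and all layer sizes, the clinical-feature dimension, and the class name PreSANetSketch are illustrative assumptions) encodes each two-channel CT/PET slice with a shared 2D CNN, pools a variable number of slices with self-attention, and fuses structured clinical features before three binary outcome heads for DM, LR and OS:

import torch
import torch.nn as nn

class PreSANetSketch(nn.Module):
    """Hypothetical pseudo-volumetric encoder with self-attention pooling."""

    def __init__(self, n_clinical: int = 10, d_model: int = 128):
        super().__init__()
        # Shared 2D encoder applied slice by slice to 2-channel (CT, PET) input.
        self.slice_encoder = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, d_model),
        )
        # Self-attention over the slice axis accommodates variable scan length.
        self.attention = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        # Structured patient (clinical) data joins the pooled image embedding.
        self.clinical = nn.Sequential(nn.Linear(n_clinical, 32), nn.ReLU())
        self.heads = nn.Linear(d_model + 32, 3)  # DM, LR, OS logits

    def forward(self, volume: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        # volume: (batch, slices, 2, H, W); clinical: (batch, n_clinical)
        b, s = volume.shape[:2]
        slices = self.slice_encoder(volume.flatten(0, 1)).view(b, s, -1)
        attended, _ = self.attention(slices, slices, slices)
        pooled = attended.mean(dim=1)  # aggregate over slices
        fused = torch.cat([pooled, self.clinical(clinical)], dim=-1)
        return self.heads(fused)  # apply a sigmoid to obtain probabilities

# Example: a 40-slice scan with 10 clinical variables.
model = PreSANetSketch()
print(model(torch.randn(1, 40, 2, 64, 64), torch.randn(1, 10)).shape)  # torch.Size([1, 3])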

Highlights

  • In radiation oncology, stratifying patients by predicted risk allows tailoring of therapy intensification as well as the choice between systemic and regional treatments, all of which helps to improve patient outcomes and quality of life

  • Beyond the different modalities available to capture the same anatomy, each with its own information trade-offs, such as X-ray computed tomography (CT) and positron emission tomography (PET), scans often show differences that are hard to discern because of non-reproducible conditions between institutions; scanner manufacturer and model, choice of reconstruction algorithm, and operator handling may all contribute to this effect

  • Median follow-up from radiotherapy was 45 months. An imbalanced outcome distribution is clearly observable: by the end of the study, 13% of patients reported distant metastasis, 14% presented locoregional tumor recurrence, and 19% were reported deceased from any cause


Summary

Introduction

In radiation oncology, stratifying patients by predicted risk allows tailoring of therapy intensification as well as the choice between systemic and regional treatments, all of which helps to improve patient outcomes and quality of life. The proposed architectural features and additional modalities all serve to extract more information from the available data when access to additional samples is limited. The model was trained on the public Cancer Imaging Archive Head–Neck-PET–CT dataset, consisting of 298 patients undergoing curative radiotherapy or chemoradiotherapy and acquired from four different institutions. Beyond the different modalities available to capture the same anatomy, each with its own information trade-offs, such as X-ray computed tomography (CT) and positron emission tomography (PET), scans often show differences that are hard to discern because of non-reproducible conditions between institutions; scanner manufacturer and model, choice of reconstruction algorithm, and operator handling may all contribute to this effect. All of these variabilities compound the difficulty of modeling classification tasks on small training samples. It is therefore important to assess the generalizability of deep learning models across institutions.
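
Cross-site generalization of this kind is typically assessed by holding out one acquiring institution at a time. The minimal sketch below (synthetic stand-in data and a generic scikit-learn classifier; it is not the paper's pipeline, and the feature and site arrays are illustrative assumptions) shows how such a leave-one-site-out evaluation can be expressed:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))        # stand-in patient features
y = rng.integers(0, 2, size=200)      # binary outcome (e.g. DM)
sites = rng.integers(0, 4, size=200)  # acquiring institution (4 sites)

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=sites):
    # Train on three institutions, evaluate on the held-out fourth.
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print(f"mean held-out-site accuracy: {np.mean(scores):.2f}")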

