Abstract

Radiographic imaging is routinely used to evaluate treatment response in solid tumors. Current imaging response metrics do not reliably predict the underlying biological response. Here, we present a multi-task deep learning approach that allows simultaneous tumor segmentation and response prediction. We design two Siamese subnetworks that are joined at multiple layers, which enables integration of multi-scale feature representations and in-depth comparison of pre-treatment and post-treatment images. The network is trained using 2568 magnetic resonance imaging scans of 321 rectal cancer patients for predicting pathologic complete response after neoadjuvant chemoradiotherapy. In multi-institution validation, the imaging-based model achieves AUCs of 0.95 (95% confidence interval: 0.91–0.98) and 0.92 (0.87–0.96) in two independent cohorts of 160 and 141 patients, respectively. When combined with blood-based tumor markers, the integrated model further improves prediction accuracy, with an AUC of 0.97 (0.93–0.99). Our approach to capturing dynamic information in longitudinal images may be broadly used for screening, treatment response evaluation, disease monitoring, and surveillance.

Highlights

  • Radiographic imaging is routinely used to evaluate treatment response in solid tumors

  • We trained a deep learning model to predict pathologic complete response (pCR) based on pre-treatment and post-treatment magnetic resonance images (MRI) and performed independent testing in both internal and external validation cohorts (Fig. 1b)

  • In order to effectively capture the dynamic information contained in longitudinal images, we proposed a multi-task learning framework with a deep neural network architecture (3D RP-Net)
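The multi-task framework named in the last highlight is commonly implemented as a joint objective: a segmentation loss and a response-classification loss share one encoder, so both tasks shape the learned representation. A minimal sketch of such an objective is below; the soft Dice and binary cross-entropy losses and the weighting `lam` are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a probabilistic segmentation map (assumed choice)."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def bce_loss(p, y, eps=1e-12):
    """Binary cross-entropy for the response (pCR) probability."""
    p = np.clip(p, eps, 1.0 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def multitask_loss(seg_pred, seg_target, p_response, y_response, lam=0.5):
    # Joint objective: both terms backpropagate through a shared encoder,
    # so segmentation supervision can also improve response prediction.
    return dice_loss(seg_pred, seg_target) + lam * bce_loss(p_response, y_response)

seg_pred = np.array([0.9, 0.8, 0.1])   # toy per-voxel tumor probabilities
seg_tgt = np.array([1.0, 1.0, 0.0])    # toy ground-truth mask
loss = multitask_loss(seg_pred, seg_tgt, p_response=0.7, y_response=1.0)
print(round(float(loss), 3))
```

The weighting `lam` balances the two tasks; in practice it is a tuned hyperparameter.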



Introduction

Radiographic imaging is routinely used to evaluate treatment response in solid tumors, yet current imaging response metrics do not reliably predict the underlying biological response. Most studies have focused on disease detection and diagnosis[6,7,8,9,10,11,12] by analyzing images acquired at a single time point during patient care. This approach is inherently limited for response prediction because it does not take therapy-induced changes into consideration. It has also been challenging to combine tumor segmentation and response prediction, which have traditionally been treated as separate problems in medical image analysis. Integrating these interconnected tasks in a unified model may improve prediction performance. Here, we propose a multi-task deep learning approach to predict treatment response and test the model in multi-institution cohorts of rectal cancer patients. We show that integrating the two tasks in one network, coupled with incorporating change information from longitudinal images, improves accuracy for response prediction.
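The architecture described above can be illustrated with a deliberately simplified sketch: a single shared encoder (the Siamese constraint) processes the pre- and post-treatment images, the two branches are compared at every scale, and the fused change features feed a response head while the encoder features feed a segmentation head. The class name `SiameseRPNet`, the linear "conv" stages, and all dimensions are hypothetical stand-ins, not the paper's 3D RP-Net implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_block(x, w):
    """Toy stage: linear map + ReLU (stands in for a 3D convolution block)."""
    return np.maximum(x @ w, 0.0)

class SiameseRPNet:
    """Hypothetical simplification of a Siamese multi-task network."""

    def __init__(self, in_dim=64, dims=(32, 16, 8)):
        # Shared encoder weights: both branches use the SAME parameters (Siamese).
        sizes = (in_dim,) + dims
        self.enc = [rng.standard_normal((a, b)) * 0.1 for a, b in zip(sizes, sizes[1:])]
        fused = sum(dims)  # concatenated multi-scale difference features
        self.w_cls = rng.standard_normal((fused, 1)) * 0.1       # response head
        self.w_seg = rng.standard_normal((dims[-1], in_dim)) * 0.1  # segmentation head

    def encode(self, x):
        feats = []
        for w in self.enc:
            x = conv_block(x, w)
            feats.append(x)  # keep every scale so the branches can be joined there
        return feats

    def forward(self, pre, post):
        f_pre, f_post = self.encode(pre), self.encode(post)
        # Join the branches at multiple layers: per-scale feature differences
        # capture therapy-induced change, then are concatenated.
        fused = np.concatenate([b - a for a, b in zip(f_pre, f_post)], axis=-1)
        p_response = 1.0 / (1.0 + np.exp(-(fused @ self.w_cls)))  # sigmoid
        seg_logits = f_post[-1] @ self.w_seg  # toy per-voxel segmentation logits
        return p_response, seg_logits

net = SiameseRPNet()
pre = rng.standard_normal((1, 64))   # flattened "pre-treatment image"
post = rng.standard_normal((1, 64))  # flattened "post-treatment image"
p, seg = net.forward(pre, post)
print(p.shape, seg.shape)  # (1, 1) (1, 64)
```

Sharing encoder weights forces both time points into one feature space, so the per-scale differences directly encode change rather than inter-branch calibration drift.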

