Abstract

Background and Purpose
Pretreatment prediction of the response to neoadjuvant chemoradiotherapy (NCRT) helps determine subsequent treatment plans for patients with locally advanced rectal cancer (LARC). If good responders (GRs) and non-good responders (non-GRs) can be accurately predicted, patients can choose to intensify NCRT to decrease the risk of tumor progression during treatment and increase the chance of organ preservation. Compared with radiomics methods, deep learning (DL) can adaptively extract features from images without handcrafted feature definition. However, DL suffers from limited training samples and signal discrepancy among different scanners. This study aims to construct a DL model to predict GRs by training on apparent diffusion coefficient (ADC) images from different scanners.

Methods
The study retrospectively recruited 700 participants, chronologically divided into a training group (n = 500) and a test group (n = 200). Deep convolutional neural networks were constructed to classify GRs and non-GRs. The networks were designed with a max-pooling layer parallelized by a center-cropping layer to extract features at both macro and micro scales. ADC images and T2-weighted images were collected at 1.5 Tesla and 3.0 Tesla. The networks were trained on image patches delineated by radiologists in the ADC images and T2-weighted images, respectively. Pathological results were used as the ground truth. The deep learning models were evaluated on the test group and compared with prediction by the mean ADC value.

Results
The area under the receiver operating characteristic (ROC) curve (AUC) is 0.851 (95% CI: 0.789–0.914) for the DL model with ADC images (DL_ADC), significantly larger (P = 0.018, Z = 2.367) than that of the mean ADC (AUC = 0.723, 95% CI: 0.637–0.809). The sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of the DL_ADC model are 94.3%, 68.3%, 87.4% and 83.7%, respectively. The DL model with T2-weighted images (DL_T2) produces an AUC of 0.721 (95% CI: 0.640–0.802), significantly (P < 0.001, Z = 3.554) lower than that of the DL_ADC model.

Conclusion
The deep learning model reveals the potential of pretreatment apparent diffusion coefficient images for predicting good responders to neoadjuvant chemoradiotherapy.
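
The Methods describe a network block in which a max-pooling branch and a center-cropping branch run in parallel, so features are extracted at both the macro scale (coarse context) and the micro scale (full-resolution detail around the tumor center). The authors' implementation is not reproduced here; the following is a minimal sketch of that idea in PyTorch, where the channel counts, patch size, crop size and fusion layer are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch (not the authors' code) of a parallel max-pool / center-crop
# block: the max-pool branch captures coarse context, the center-crop branch
# keeps full-resolution detail, and the two are concatenated and fused.
import torch
import torch.nn as nn


class MacroMicroBlock(nn.Module):
    """Parallel max-pooling / center-cropping feature extraction (illustrative)."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # Macro branch: downsample the whole patch to capture coarse context.
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        # Fusion: convolve the concatenated macro + micro feature maps.
        self.fuse = nn.Conv2d(2 * in_channels, out_channels, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    @staticmethod
    def center_crop(x: torch.Tensor, size: int) -> torch.Tensor:
        # Micro branch: crop the central region at the pooled spatial size,
        # preserving full-resolution detail around the patch center.
        _, _, h, w = x.shape
        top, left = (h - size) // 2, (w - size) // 2
        return x[:, :, top:top + size, left:left + size]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        macro = self.pool(x)                          # coarse, downsampled view
        micro = self.center_crop(x, macro.shape[-1])  # fine, central view
        merged = torch.cat([macro, micro], dim=1)     # fuse both scales
        return self.act(self.fuse(merged))


if __name__ == "__main__":
    # A single-channel 32x32 ADC image patch (sizes are assumptions).
    patch = torch.randn(1, 1, 32, 32)
    block = MacroMicroBlock(in_channels=1, out_channels=16)
    print(block(patch).shape)  # torch.Size([1, 16, 16, 16])
```

In this sketch both branches produce feature maps of the same spatial size, so they can be concatenated channel-wise before further convolutional layers; how the published network combines the two branches and sizes its layers is not specified in the abstract.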

Highlights

  • Locally advanced rectal cancer (LARC) is defined as rectal cancer with clinical tumor stage 3-4 or a positive clinical nodal stage

  • We proposed a deep learning method to predict the response to neoadjuvant chemoradiotherapy (NCRT) using only pretreatment MRI data

  • Compared with strategies that use both pretreatment and posttreatment data, the method may predict the response to NCRT before the initiation of treatment

Introduction

Locally advanced rectal cancer (LARC) is defined as rectal cancer with clinical tumor stage 3-4 (cT3-cT4, tumor invades through the muscularis propria) or a positive clinical nodal stage (cN+, malignant lymph nodes are detected). Some good responders (GRs) may achieve pathological tumor stage 0-1 (ypT0-1, muscularis propria is not invaded) and a negative pathological nodal stage (ypN0, no malignant lymph nodes are found) after NCRT. These GRs may avoid total mesorectal excision (TME) surgery through a "wait and see" strategy or local excision, preserving the organ and improving quality of life [4, 5]. This study aims to construct a deep learning (DL) model to predict GRs by training on apparent diffusion coefficient (ADC) images from different scanners.

