Abstract

To investigate whether deep learning-based radiomic features extracted from preoperative multimodal MR images can provide accurate survival prediction for glioblastoma multiforme (GBM) patients. One hundred sixty-three GBM patients with overall survival (OS) data were acquired from the BraTS 2018 dataset. Each patient had three preoperative multimodal MR images: a gadolinium contrast-enhanced T1-weighted (CE-T1w), a T2-weighted (T2w), and a T2-weighted fluid-attenuated inversion recovery (T2-FLAIR). All images were co-registered, skull-stripped, and resampled to 1 mm³ voxel resolution. The patient cohort was randomly split into a training dataset of 120 patients and an independent testing dataset of 43 patients. A pre-trained convolutional neural network, VGG19, was used to extract deep learning-based features from the multimodal MR images. Two sets of deep learning-based radiomic features (average-pool and fully-connected) were acquired for comparison. For each feature set, the Cox proportional hazards model with least absolute shrinkage and selection operator (LASSO) was applied to the training dataset to construct the radiomic signature, a linear combination of selected features, for OS prediction. A ten-fold cross-validation protocol was used to tune hyper-parameters. The prognostic performance of the generated signature was assessed by computing the concordance index (C-index) on the testing dataset. A cut-point on the signature score, derived from the training dataset, was used to stratify testing patients into low-risk and high-risk groups. Kaplan-Meier survival analysis was conducted to evaluate the statistical significance of the difference between the two groups. The signature constructed using fully-connected features yielded a C-index of 0.571 (95% CI: 0.486-0.656), while the one constructed using average-pool features achieved a C-index of 0.665 (95% CI: 0.590-0.740). A paired t-test on the C-index indicated a significant difference (p=0.013).
The fully-connected signature did not achieve significant stratification of the testing patients (p=0.258, HR=1.406, 95% CI: 0.748-2.644), whereas the average-pool signature did (p<0.001, HR=3.260, 95% CI: 1.500-7.085). Average-pool deep features extracted from preoperative multimodal MR images achieved better performance in OS prediction and stratification for GBM patients than fully-connected deep features. Our study indicates the potential of deep learning-based biomarkers for prognosis prediction in GBM patients.
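The concordance index reported above measures how often, across all comparable patient pairs, the patient with the higher predicted risk score actually survived for a shorter time. The following sketch (not from the paper; a minimal illustration with synthetic data) implements Harrell's C-index, the standard form of this metric for right-censored survival data:

```python
import numpy as np

def concordance_index(time, event, risk):
    """Harrell's C-index: fraction of comparable pairs in which the
    patient with the higher risk score has the shorter observed
    survival time. Ties in risk score contribute 0.5. A pair (i, j)
    is comparable only if patient i experienced the event (event=1)
    before patient j's observed time, so censored times still
    contribute as the longer-surviving member of a pair."""
    n_concordant, n_pairs = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] and time[i] < time[j]:  # comparable pair
                n_pairs += 1
                if risk[i] > risk[j]:
                    n_concordant += 1.0
                elif risk[i] == risk[j]:
                    n_concordant += 0.5
    return n_concordant / n_pairs

# Hypothetical toy cohort: survival times in months, event indicator
# (1 = death observed, 0 = censored), and model risk scores where a
# higher score means a higher predicted hazard.
time = np.array([5.0, 10.0, 12.0, 20.0])
event = np.array([1, 1, 0, 1])
risk = np.array([2.0, 1.5, 1.0, 0.5])

print(concordance_index(time, event, risk))  # perfectly concordant -> 1.0
```

A C-index of 0.5 corresponds to random prediction and 1.0 to perfect ranking, which is why the average-pool signature's 0.665 represents a modest but genuine prognostic signal. The O(n²) pair loop above is fine for cohorts of this size; libraries such as lifelines provide optimized equivalents.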
