Abstract

High-grade gliomas are the most aggressive malignant brain tumors. Accurate pre-operative prognosis for this cohort can lead to better treatment planning. Conventional survival prediction based on clinical information is subjective and can be inaccurate. Recent radiomics studies have shown better prognosis prediction by using carefully engineered image features from magnetic resonance images (MRI). However, feature engineering is usually time-consuming, laborious and subjective. Most importantly, the engineered features cannot effectively encode other predictive but implicit information provided by multi-modal neuroimages. We propose a two-stage learning-based method to predict the overall survival (OS) time of high-grade glioma patients. In the first stage, we adopt deep learning, a recently dominant technique in artificial intelligence, to automatically extract implicit, high-level features from multi-modal, multi-channel preoperative MRI such that the features are capable of predicting survival time. Specifically, we utilize not only contrast-enhanced T1 MRI, but also diffusion tensor imaging (DTI) and resting-state functional MRI (rs-fMRI), computing multiple metric maps (including various diffusivity metric maps derived from DTI, as well as frequency-specific brain fluctuation amplitude maps and local functional connectivity anisotropy-related metric maps derived from rs-fMRI) from 68 high-grade glioma patients with different survival times. We propose a multi-channel architecture of 3D convolutional neural networks (CNNs) for deep learning on these metric maps, from which high-level predictive features are extracted for each individual patch of the maps. In the second stage, these deeply learned features, along with pivotal demographic and tumor-related features (such as age, tumor size and histological type), are fed into a support vector machine (SVM) to generate the final prediction (i.e., long or short overall survival time). The experimental results demonstrate that this multi-modal, multi-channel deep survival prediction framework achieves an accuracy of 90.66%, outperforming all competing methods. This study indicates the effectiveness of deep learning for prognosis in neuro-oncological applications, supporting better individualized treatment planning towards precision medicine.
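To make the two-stage design concrete, below is a minimal sketch of the first stage, assuming PyTorch: a multi-channel 3D CNN that maps a stack of metric maps for one image patch to a high-level feature vector and a long/short-OS prediction. The patch size, channel count and layer widths are illustrative assumptions, not the authors' exact architecture.

# Minimal PyTorch sketch of a multi-channel 3D CNN patch feature extractor.
# All hyper-parameters (patch size, channel count, layer widths) are
# illustrative assumptions, not the authors' exact architecture.
import torch
import torch.nn as nn

class MultiChannel3DCNN(nn.Module):
    def __init__(self, in_channels: int, feature_dim: int = 128):
        super().__init__()
        # in_channels = number of metric maps stacked per patch
        # (e.g., T1, DTI diffusivity maps, rs-fMRI amplitude maps).
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(64, feature_dim),   # high-level patch feature
        )
        self.classifier = nn.Linear(feature_dim, 2)  # long vs. short OS

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, D, H, W) multi-modal image patch
        f = self.head(self.features(x))
        return self.classifier(f)

# One forward pass on dummy 32x32x32 patches with 6 metric-map channels.
model = MultiChannel3DCNN(in_channels=6)
logits = model(torch.randn(4, 6, 32, 32, 32))
print(logits.shape)  # torch.Size([4, 2])

After training on patch labels inherited from each patient's OS class, the output of the feature head (rather than the classifier) would serve as the learned patch representation passed to the second stage.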

Highlights

  • Prognosis of high-grade glioma conventionally relies on histopathological type, patient's age, physical status, performance status and neurological disability[1,2,3]

  • The T1 images were visually evaluated by three raters independently, and only consensus results were used for decision making; patients were excluded (3) with irrelevant death causes during follow-up, which may confound overall survival (OS) estimation, and (4) with an inadequate follow-up period to determine the long or short OS label

  • The performance measures averaged over all the folds are reported in Table 2, including accuracy (ACC), sensitivity (SEN), specificity (SPE), positive predictive rate (PPR), and negative predictive rate (NPR); see the sketch after this list for how these follow from a confusion matrix
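As a minimal illustration of the reported measures, the function below computes all five from the entries of a binary confusion matrix; the variable names and example counts are ours, not from the paper.

# Fold-wise performance measures from a binary confusion matrix.
# The tp/fp/tn/fn counts below are hypothetical; the paper averages
# these measures over all cross-validation folds (Table 2).
def fold_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "ACC": (tp + tn) / (tp + fp + tn + fn),   # accuracy
        "SEN": tp / (tp + fn),                    # sensitivity
        "SPE": tn / (tn + fp),                    # specificity
        "PPR": tp / (tp + fp),                    # positive predictive rate
        "NPR": tn / (tn + fn),                    # negative predictive rate
    }

# Example: one hypothetical fold of a long/short OS classifier.
print(fold_metrics(tp=8, fp=1, tn=7, fn=1))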



Introduction

The prognosis of high-grade glioma is conventionally based on histopathological type, patient's age, physical status, performance status and neurological disability[1,2,3], and high-grade gliomas generally correspond to short OS[4]. However, recent molecular pathological studies have shown that high-grade glioma patients with the same tumor histopathology may have significantly different OS[5]. These findings indicate that traditional prognosis prediction based on simple clinical and demographic information may not be adequately accurate[6,7,8,9]. We propose a novel learning-based method to predict the OS of high-grade glioma patients: (1) we first automatically learn high-level features for patches from multi-modal images (i.e., using the commonly acquired presurgical images, including contrast-enhanced T1 MRI, DTI and resting-state fMRI (rs-fMRI)) by training a supervised deep learning model at the patch level; (2) we train a binary support vector machine (SVM) model[36,37] on the automatically extracted semantic features (i.e., by concatenating patch-level features into a patient-level feature vector for each patient) to predict the OS of each patient. We further (i) extend our method by introducing an additional modality (fTensor-fMRI) to enhance the multi-modal, multi-channel feature learning with supplementary information, (ii) explore the impact of using multi-modality information, (iii) investigate the impact of convolutional kernels by comparing 3D with 2D convolutional kernels, (iv) compare the proposed supervised learned features with unsupervisedly extracted features on the classification task, and (v) test on an extra 25-subject dataset.
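Under our assumptions, the second stage could look like the following scikit-learn sketch: patch-level CNN features are concatenated into one patient-level vector, joined with clinical covariates, and classified by a binary SVM. The feature dimensions, random placeholder data and linear kernel are illustrative, not the paper's exact configuration.

# Second-stage sketch: patient-level SVM on concatenated patch features.
# Shapes, placeholder data and the linear kernel are our assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_patients, n_patches, feat_dim = 68, 5, 128

# Patient-level feature: patch features concatenated per patient,
# plus clinical covariates (e.g., age, tumor size, histological type).
patch_feats = rng.normal(size=(n_patients, n_patches * feat_dim))
clinical = rng.normal(size=(n_patients, 3))
X = np.hstack([patch_feats, clinical])
y = rng.integers(0, 2, size=n_patients)   # 1 = long OS, 0 = short OS

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))

In practice the features and labels would come from the trained CNN and the clinical records, and performance would be estimated with cross-validation rather than training accuracy.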
