Abstract

Glioblastoma (GBM) and brain metastases (BMs) are the two most common malignant brain tumors in adults. Magnetic resonance imaging (MRI) is widely used for screening brain tumors and evaluating their prognosis, but conventional MRI sequences have limited sensitivity and specificity for the differential diagnosis of GBM and BMs. In recent years, deep neural networks have shown great potential for diagnostic classification and for building clinical decision support systems. This study applies radiomics features extracted with deep learning techniques to explore the feasibility of accurate preoperative classification of newly diagnosed GBM and solitary brain metastases (SBMs), and further examines the impact of multimodal data fusion on the classification task. Standard-protocol cranial MRI data from 135 newly diagnosed GBM patients and 73 patients with SBMs, confirmed by histopathologic or clinical diagnosis, were retrospectively analyzed. First, structural T1-weighted, contrast-enhanced T1-weighted (T1C), and T2-weighted sequences were selected as the three inputs to the model; regions of interest (ROIs) were manually delineated on the three registered MR modalities, multimodal radiomics features were extracted, dimensionality was reduced with a random forest (RF)-based feature selection method, and the importance of each feature was further analyzed. Second, a contrastive disentanglement method was used to identify the shared and complementary features across the modalities. Finally, each sample was classified as GBM or SBM by fusing these two types of features from the different modalities. Radiomics features combined with machine learning and the multimodal fusion method discriminated well between GBM and SBMs. Furthermore, compared with single-modality data, multimodal fusion models using machine learning algorithms such as support vector machine (SVM), logistic regression, RF, adaptive boosting (AdaBoost), and gradient boosting decision tree (GBDT) achieved significant improvements, with area under the curve (AUC) values of 0.974, 0.978, 0.943, 0.938, and 0.947, respectively. Our contrastive disentangled multimodal MR fusion method also performed well: AUC, accuracy (ACC), sensitivity (SEN), and specificity (SPE) on the test set were 0.985, 0.984, 0.900, and 0.990, respectively. Compared with other multimodal fusion methods, our approach achieved the best AUC, ACC, and SEN. In the ablation experiment verifying the contribution of each module, AUC, ACC, and SEN increased by 1.6%, 10.9%, and 15.0%, respectively, when the three loss functions were used simultaneously. A deep learning-based contrastive disentangled multimodal MR radiomics feature fusion technique helps to improve the accuracy of GBM versus SBM classification.
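The abstract does not give implementation details for the RF-based feature selection step; the following is a minimal sketch, assuming scikit-learn and a pooled radiomics feature matrix. The names `X`, `y`, `select_features_rf`, and the `n_keep` threshold are illustrative, not the authors' configuration.

```python
# Minimal sketch of RF-based radiomics feature selection with scikit-learn.
# X: (n_samples, n_features) radiomics features pooled from the three
# registered modalities; y: labels (0 = GBM, 1 = SBM). Hypothetical names.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def select_features_rf(X, y, n_keep=50, n_estimators=500, seed=0):
    """Rank features by RF impurity importance; keep the top n_keep."""
    rf = RandomForestClassifier(n_estimators=n_estimators, random_state=seed)
    rf.fit(X, y)
    order = np.argsort(rf.feature_importances_)[::-1]  # descending importance
    keep = order[:n_keep]
    return X[:, keep], keep, rf.feature_importances_[keep]
```

The returned importance scores also support the per-feature importance analysis mentioned above.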
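The contrastive disentanglement step is only named in the abstract; the PyTorch sketch below shows one common way such a module is realized, under the assumption that each modality's feature vector is split by separate encoder heads into a shared code and a modality-specific (complementary) code, with a contrastive alignment term on the shared codes and an orthogonality term between shared and specific codes. All class and function names are hypothetical.

```python
# Illustrative sketch of contrastive disentanglement for two modality
# feature vectors (not the authors' exact architecture or losses).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangleEncoder(nn.Module):
    """Splits one modality's features into shared and specific codes."""
    def __init__(self, in_dim, latent_dim=64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.ReLU(),
                                    nn.Linear(latent_dim, latent_dim))
        self.specific = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.ReLU(),
                                      nn.Linear(latent_dim, latent_dim))

    def forward(self, x):
        return self.shared(x), self.specific(x)

def disentangle_losses(s1, p1, s2, p2):
    """Alignment + orthogonality losses for one batch of paired samples."""
    # Pull the shared codes of the two modalities of the same sample together.
    align = 1.0 - F.cosine_similarity(s1, s2, dim=1).mean()
    # Push each modality's shared code away from its specific code.
    ortho = (F.cosine_similarity(s1, p1, dim=1).abs().mean()
             + F.cosine_similarity(s2, p2, dim=1).abs().mean())
    return align, ortho
```

The third loss mentioned in the abstract is presumably the classification loss on the fused features; combining it with terms like `align` and `ortho` would match the ablation in which all three losses are used simultaneously.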
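For the classical machine-learning baselines, a plausible reading is simple feature-level fusion (concatenation) followed by the five listed classifiers. The sketch below uses scikit-learn with randomly generated placeholder features so it runs standalone; the shapes, split, and hyperparameters are assumptions, not the study's protocol.

```python
# Sketch of the multimodal fusion baselines: concatenate per-modality
# features and compare the five classifiers by AUC/ACC/SEN/SPE.
import numpy as np
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_t1, X_t1c, X_t2 = (rng.random((208, 50)) for _ in range(3))  # placeholder features
y = rng.integers(0, 2, 208)                                    # placeholder labels

X = np.hstack([X_t1, X_t1c, X_t2])  # simple feature-level fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
models = {
    "SVM": SVC(probability=True),
    "LogReg": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=500),
    "AdaBoost": AdaBoostClassifier(),
    "GBDT": GradientBoostingClassifier(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    print(f"{name}: AUC={auc:.3f} ACC={accuracy_score(y_te, pred):.3f} "
          f"SEN={tp / (tp + fn):.3f} SPE={tn / (tn + fp):.3f}")
```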
