Abstract

The Grade of meningioma has significant implications for selecting treatment regimens ranging from observation to surgical resection with adjuvant radiation. For most patients, meningiomas are diagnosed radiologically, and Grade is not determined unless a surgical procedure is performed. The goal of this study is to train a novel auto-classification network to distinguish Grade I from Grade II meningiomas using T1-contrast enhancing (T1-CE) and T2-fluid attenuated inversion recovery (FLAIR) magnetic resonance (MR) images. Ninety-six consecutive treatment-naïve patients with pre-operative T1-CE and T2-FLAIR MR images and subsequently pathologically diagnosed intracranial meningiomas were evaluated. Delineation of meningiomas was completed on both MR image sets. A novel asymmetric 3D convolutional neural network (CNN) architecture was constructed with two encoding paths based on T1-CE and T2-FLAIR. Each path used the same 3 × 3 × 3 kernel with a different number of filters to weigh the spatial features of each sequence separately. Final model performance was assessed by tenfold cross-validation. Of the 96 patients, 55 (57%) were pathologically classified as Grade I and 41 (43%) as Grade II meningiomas. Optimization of our model led to a filter weighting of 18:2 between the T1-CE and T2-FLAIR MR image paths. Eighty-six (90%) patients were classified correctly, and 10 (10%) were misclassified based on their pre-operative MR images, with a model sensitivity of 0.85 and specificity of 0.93. Among the misclassified, 4 were Grade I and 6 were Grade II. The model was robust to variations in tumor location and size. A novel asymmetric CNN with two differently weighted encoding paths was developed for successful automated meningioma grade classification. Our model outperforms a CNN using a single path for single- or multimodal MR-based classification.
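The reported sensitivity and specificity are consistent with the per-grade misclassification counts given in the abstract, assuming Grade II is treated as the positive class (the abstract does not state this explicitly, but it is the reading that reproduces the numbers). A quick arithmetic check:

```python
# Confusion counts reported in the abstract (Grade II assumed positive).
n_grade1, n_grade2 = 55, 41
mis_grade1, mis_grade2 = 4, 6   # misclassified cases per grade

tp = n_grade2 - mis_grade2      # Grade II correctly predicted Grade II
tn = n_grade1 - mis_grade1      # Grade I correctly predicted Grade I
sensitivity = tp / n_grade2
specificity = tn / n_grade1
accuracy = (tp + tn) / (n_grade1 + n_grade2)
print(round(sensitivity, 2), round(specificity, 2), round(accuracy, 2))
# → 0.85 0.93 0.9
```

This matches the reported sensitivity of 0.85, specificity of 0.93, and overall accuracy of 90% (86 of 96 patients).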

Highlights

  • The Grade of meningioma has significant implications for selecting treatment regimens ranging from observation to surgical resection with adjuvant radiation

  • To demonstrate the individual predictive performance of different MRI sequences, T1-contrast enhancing (T1-CE) and T2-fluid attenuated inversion recovery (FLAIR) images were each trained with a traditional single-path convolutional neural network (CNN)

  • When T2-FLAIR alone was applied to the traditional CNN model, 31 (32%) patients were predicted correctly and 65 (68%) were misclassified


Introduction

The Grade of meningioma has significant implications for selecting treatment regimens ranging from observation to surgical resection with adjuvant radiation. Deep learning has delivered improved performance on complex problems including image colorization, classification, segmentation, and pattern detection. Among these methods, convolutional neural networks (CNNs) have been studied extensively and shown to improve prediction performance when trained on large amounts of pre-labeled data[7–10]. With the help of deep learning, radiological grading of meningiomas can guide treatment options, including resection, radiation, or observation, in patients who do not undergo pathologic diagnosis. Banzato et al. separately tested two networks on T1-CE MR images and apparent diffusion coefficient (ADC) maps for meningioma classification and achieved their best grade-prediction performance, an accuracy of 0.93, using an Inception deep CNN on ADC maps[15]. Motivated by the success of asymmetric learning from two different kernels[17], an asymmetric learning architecture for multimodal MR images was built, using an additional encoding path to predict meningioma grade.
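The asymmetric two-path idea can be illustrated with a toy NumPy sketch. This is not the authors' implementation: the volume size, weights, and pooling are hypothetical, but it shows the core mechanism described in the abstract — two encoding paths sharing the same 3 × 3 × 3 kernel size but with the reported 18:2 filter allocation between T1-CE and T2-FLAIR, fused into one feature vector.

```python
import numpy as np

def conv3d_valid(vol, kernels):
    """Valid 3-D convolution of a single-channel volume with a bank of
    3x3x3 kernels. Returns shape (n_kernels, D-2, H-2, W-2)."""
    D, H, W = vol.shape
    k = kernels.shape[0]
    out = np.zeros((k, D - 2, H - 2, W - 2))
    for i in range(D - 2):
        for j in range(H - 2):
            for l in range(W - 2):
                patch = vol[i:i + 3, j:j + 3, l:l + 3]
                # Contract each kernel against the 3x3x3 patch.
                out[:, i, j, l] = np.tensordot(kernels, patch, axes=3)
    return out

rng = np.random.default_rng(0)
t1ce = rng.standard_normal((8, 8, 8))    # toy T1-CE volume
flair = rng.standard_normal((8, 8, 8))   # toy T2-FLAIR volume

# Asymmetric filter allocation: 18 filters for T1-CE, 2 for T2-FLAIR,
# both using the same 3x3x3 kernel size.
w_t1 = rng.standard_normal((18, 3, 3, 3)) * 0.1
w_fl = rng.standard_normal((2, 3, 3, 3)) * 0.1

feat_t1 = np.maximum(conv3d_valid(t1ce, w_t1), 0)   # ReLU activation
feat_fl = np.maximum(conv3d_valid(flair, w_fl), 0)

# Fuse the two encoding paths, then global-average-pool to one
# 20-dimensional feature vector for a downstream grade classifier.
features = np.concatenate([feat_t1, feat_fl], axis=0).mean(axis=(1, 2, 3))
print(features.shape)  # → (20,)
```

Because the T1-CE path carries 18 of the 20 filters, its features dominate the fused representation, which is how the optimized 18:2 weighting lets the network weigh the spatial features of each sequence separately.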

