Abstract

This work presents a method for the classification and segmentation of brain tumors based on deep learning analysis of contrast-enhanced T1 (T1c) brain MR images. To achieve this goal, three different deep learning networks are investigated, i.e., the U-Net, VGG16-SegNet, and DeepLabv3+ models. In addition, 3D narrow-band information from the MRI volumes is fed to the input of the Convolutional Neural Network (CNN) to describe the tumor anatomy more accurately. Experiments are performed on the MICCAI 2018 High Grade Glioma (HGG) subset of the Brain Tumor Segmentation (BraTS) Challenge, composed of 210 brain T1c MRI volumes, each of 155 cross-sections. Among the three investigated CNNs, the DeepLabv3+ network achieves the highest Dice Similarity Coefficients (DSC) of 91.2%, 92.5%, and 94.6% for the segmentation of the Enhancing Tumor (ET), the Tumor Core (TC), and the Whole Tumor (WT), respectively. Comparison with related work confirms the advantages of the proposed system.

Highlights

  • Brain tumors are a leading cause of cancer death

  • In 2020, the American Cancer Society (ACS) estimated about 23,890 new malignant brain tumors and around 18,020 deaths from malignant brain tumors [2]

  • Throughout literature, different methodologies have been investigated for brain tumor segmentation


Summary

INTRODUCTION

Brain tumors are a leading cause of cancer death. In the USA, 700,000 people are diagnosed with brain tumors (80% benign and 20% malignant) [1]. Throughout the literature, different methodologies have been investigated for brain tumor segmentation. These methods can be categorized as traditional methods (discriminative or generative) and deep learning methods. A random forest classifier, based on asymmetry-related features, achieved the best performance on the BraTS 2013 database [6], i.e., DSCs of 0.87, 0.78, and 0.74 for the "WT", "TC", and "ET" components, respectively. Kwon et al. [7] used an atlas generation method for the segmentation of multifocal tumors, using the BraTS 2013 database. They achieved accuracies of 0.86, 0.79, and 0.59 for the "WT", "TC", and "ET" components, respectively. These methods require a high-quality registration of the test images to the atlas, which is a complicated and computationally expensive task.
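The DSC values quoted above measure the overlap between a predicted segmentation mask and the ground-truth mask. As a minimal illustration (not the paper's implementation; the function name and toy masks are invented for the example), the metric can be computed on binary NumPy arrays as 2|A∩B| / (|A| + |B|):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice Similarity Coefficient: 2*|A intersect B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * intersection / total

# Toy 4x4 binary masks standing in for one tumor sub-region (e.g. "WT").
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(dice_coefficient(pred, truth))  # 2*3 / (4+3) ≈ 0.857
```

In the BraTS setting the same computation is applied per volume and per sub-region (ET, TC, WT), then averaged over the test cases.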

MATERIALS AND METHODS
Classification:
Qualitative Results
Experimentation Setting
Comparative Results
Experimental setup
CONCLUSION

