Abstract

Brain tumor segmentation plays a substantial role in medical image segmentation (MIS). Automatic segmentation methods enable precise and efficient delineation of tumors, contributing significantly to diagnosis and treatment planning in medical applications. Recently, several deep learning architectures have been proposed that have transformed the MIS field. In particular, the combination of Convolutional Neural Networks (CNNs) and Transformers has greatly improved segmentation results, since the attention mechanism in Transformers models long-range contextual dependencies among the features extracted by the CNN encoder. This paper proposes an advanced hybrid 3D model for brain tumor segmentation from multi-modal magnetic resonance images. The model exploits the features extracted at each depth of the encoders of the 3D U-Net and V-Net architectures. At each decoder depth, these features are concatenated and fused to build new, more informative features, followed by a 3D convolution layer and a Transformer block that captures additional contextual information; a final convolution block then produces the segmented tumor. The model is evaluated on the BraTS 2020 dataset for segmenting the different brain tumor sub-regions. The results demonstrate its effectiveness in terms of Dice similarity coefficient (DSC) and Hausdorff distance (HD): DSC scores of 91.95%, 82.80%, and 81.70% are achieved for the Whole Tumor (WT), Tumor Core (TC), and Enhancing Tumor (ET), respectively, while HD values of 4.9 mm, 6.0 mm, and 3.8 mm are obtained for WT, TC, and ET.
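A minimal sketch of how one such dual-encoder fusion step might look, assuming a PyTorch implementation: same-depth features from the two encoders are concatenated, fused by a 3D convolution, and refined by a Transformer encoder layer over flattened voxel tokens. The module name, channel sizes, and layer settings below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): one decoder-depth fusion block
# combining features from a 3D U-Net encoder and a V-Net encoder.
import torch
import torch.nn as nn


class FusionTransformerBlock(nn.Module):
    def __init__(self, unet_ch: int, vnet_ch: int, out_ch: int, n_heads: int = 4):
        super().__init__()
        # Fuse the concatenated encoder features with a 3D convolution.
        self.fuse = nn.Sequential(
            nn.Conv3d(unet_ch + vnet_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )
        # Transformer layer models long-range context over voxel tokens.
        self.transformer = nn.TransformerEncoderLayer(
            d_model=out_ch, nhead=n_heads, dim_feedforward=2 * out_ch,
            batch_first=True,
        )

    def forward(self, unet_feat: torch.Tensor, vnet_feat: torch.Tensor) -> torch.Tensor:
        # unet_feat, vnet_feat: (B, C, D, H, W) features from the same encoder depth.
        x = self.fuse(torch.cat([unet_feat, vnet_feat], dim=1))
        b, c, d, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (B, D*H*W, C) voxel tokens
        tokens = self.transformer(tokens)       # long-range self-attention
        return tokens.transpose(1, 2).reshape(b, c, d, h, w)


if __name__ == "__main__":
    block = FusionTransformerBlock(unet_ch=32, vnet_ch=32, out_ch=64)
    f_unet = torch.randn(1, 32, 8, 8, 8)   # 3D U-Net encoder features (assumed shape)
    f_vnet = torch.randn(1, 32, 8, 8, 8)   # V-Net encoder features (assumed shape)
    print(block(f_unet, f_vnet).shape)     # torch.Size([1, 64, 8, 8, 8])
```

In a full decoder, one such block would sit at every depth, with its output upsampled and passed to the next depth before the final convolution block predicts the tumor labels.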
