Abstract

Brain tumor segmentation from Magnetic Resonance Imaging (MRI) is essential for early diagnosis and treatment planning of brain cancers in clinical practice. However, existing brain tumor segmentation methods cannot sufficiently learn high-quality feature information for segmentation. To address this issue, a deep neural network based on modality-level cross-connection and attentional feature fusion is proposed for multi-modal brain tumor segmentation. The proposed method not only locates the whole tumor region but also accurately segments the sub-tumor regions. The network architecture is a multi-encoder 3D U-Net. Motivated by the complementary characteristics of the MRI modalities, a modality-level cross-connection (MCC) is first proposed to exploit the complementary information between related modalities. Moreover, to enhance the feature learning capacity of the network, an attentional feature fusion module (AFFM) is proposed to fuse the multi-modal features and extract useful feature representations for segmentation. It consists of two components: a multi-scale spatial feature fusion (MSFF) block and a dual-path channel feature fusion (DCFF) block, which learn multi-scale spatial contextual information and channel-wise feature information, respectively, to improve segmentation accuracy. The proposed fusion module can also be easily integrated into other fusion models and deep neural network architectures. Comprehensive experiments on the BraTS 2018 dataset demonstrate that the proposed network architecture effectively improves brain tumor segmentation performance compared with both baseline and state-of-the-art methods.
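To make the channel-wise fusion idea concrete, the following is a minimal illustrative sketch (in NumPy) of attentional fusion between two modality feature maps: per-channel descriptors from global average pooling drive a sigmoid gate that weights the contribution of each modality. This is a hypothetical simplification for intuition only; the paper's actual DCFF/MSFF blocks are learned convolutional modules and are not specified here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention_fuse(feat_a, feat_b):
    """Illustrative channel-attention fusion of two modality feature maps.

    feat_a, feat_b: arrays of shape (C, D, H, W), e.g. features from two
    related MRI modalities. A per-channel descriptor is computed by global
    average pooling over the spatial dimensions; a sigmoid gate then decides,
    channel by channel, how much of each modality to keep.
    (Hypothetical sketch, not the paper's exact DCFF block.)
    """
    # Global average pooling over the spatial dims (D, H, W) -> shape (C,)
    desc = (feat_a + feat_b).mean(axis=(1, 2, 3))
    # Per-channel gate in (0, 1), broadcast back to (C, 1, 1, 1)
    gate = sigmoid(desc)[:, None, None, None]
    # Convex combination: each channel blends the two modalities
    return gate * feat_a + (1.0 - gate) * feat_b

# Usage: fuse two random modality-like 3D feature maps
np.random.seed(0)
a = np.random.rand(8, 4, 4, 4)
b = np.random.rand(8, 4, 4, 4)
fused = channel_attention_fuse(a, b)
print(fused.shape)  # (8, 4, 4, 4)
```

Because the gate forms a convex combination per channel, every fused value lies between the corresponding values of the two inputs; a learned version would replace the pooling-plus-sigmoid gate with trainable layers.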
