Abstract
Early diagnosis and treatment of brain tumors is critical for patient recovery, but automatic segmentation is challenged by variable brain anatomy, low image contrast, and fuzzy tumor contours. In this paper, we present a dual supervision guided attentional network for multimodal brain tumor segmentation. The backbone is a multi-encoder based U-Net, in which multiple independent encoders extract an individual feature representation from each modality. A dual attention fusion block, consisting of a spatial attention module and a modality attention module, is proposed to extract the most informative feature representations across modalities. Since the same brain tumor regions can be observed in the different modalities, the spatial feature representations from the different modalities provide complementary information for segmentation. To this end, a spatial attention based supervision is introduced to enable hierarchical learning of multi-scale feature representations and to provide an additional constraint for the segmentation decoder. In addition, a second, image reconstruction based supervision is integrated into the network to regularize the encoders. Ablation experiments and visualization results on the BraTS 2019 dataset show that the proposed method achieves promising results.

Keywords: Brain tumor segmentation · Fusion · Deep supervision · Deep learning · MRI
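To make the fusion idea concrete, below is a minimal PyTorch sketch of a dual attention fusion block combining a spatial attention module (a voxel-wise attention map shared across modalities) with a modality attention module (squeeze-and-excitation style per-modality weights). All names (`DualAttentionFusion`, `n_modalities`, the layer choices) are illustrative assumptions; the paper's exact layer design, normalization, and supervision wiring may differ.

```python
# Hedged sketch of a dual attention fusion block: spatial attention
# (where to look) plus modality attention (which MRI sequence to trust).
# Illustrative only; not the authors' exact implementation.
import torch
import torch.nn as nn


class DualAttentionFusion(nn.Module):
    """Fuse per-modality encoder features with spatial and modality attention."""

    def __init__(self, channels: int, n_modalities: int = 4):
        super().__init__()
        self.n_modalities = n_modalities
        fused = channels * n_modalities
        # Spatial attention: one sigmoid map per voxel, shared across modalities.
        self.spatial = nn.Sequential(
            nn.Conv3d(fused, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        # Modality attention: global pooling followed by a linear layer that
        # produces one scalar weight per modality branch.
        self.modality = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(fused, n_modalities),
            nn.Sigmoid(),
        )
        # 1x1x1 projection back to a single-branch channel count.
        self.project = nn.Conv3d(fused, channels, kernel_size=1)

    def forward(self, feats: list) -> torch.Tensor:
        # feats: list of (B, C, D, H, W) tensors, one per MRI modality.
        x = torch.cat(feats, dim=1)                       # (B, M*C, D, H, W)
        s = self.spatial(x)                               # (B, 1, D, H, W)
        m = self.modality(x)                              # (B, M)
        weighted = [
            f * m[:, i].view(-1, 1, 1, 1, 1)              # scale each branch
            for i, f in enumerate(feats)
        ]
        x = torch.cat(weighted, dim=1) * s                # apply both attentions
        return self.project(x)                            # back to C channels


# Usage: fuse features from four modality-specific encoders
# (e.g. T1, T1ce, T2, FLAIR on BraTS).
if __name__ == "__main__":
    fusion = DualAttentionFusion(channels=32, n_modalities=4)
    feats = [torch.randn(1, 32, 16, 16, 16) for _ in range(4)]
    print(fusion(feats).shape)  # torch.Size([1, 32, 16, 16, 16])
```

In a full model of this kind, a block like this would sit at each scale between the modality-specific encoders and the shared decoder, with the spatial attention maps also feeding the deep-supervision losses described in the abstract.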