Abstract

The segmentation of brain Magnetic Resonance (MR) images plays an essential role in neuroimaging research and clinical settings. Deep learning combined with prior knowledge and attention mechanisms is currently widely used for brain tissue segmentation because of its superior performance. However, two problems remain: first, some prior knowledge is difficult to obtain; second, the self-attention mechanism easily produces incorrect attention. To address these two issues, we propose a novel dual-encoder residual U-Net based on texture features and background knowledge, named DE-ResUnet. In DE-ResUnet, dual encoders for the T1-weighted image and its texture features are combined to learn hidden additional information. Introducing a channel attention mechanism (CAM) into the two encoder paths and the decoder path helps the model extract more informative features. Moreover, we design a strengthening module that refines the coarse segmentation by focusing on brain tissue regions under the guidance of background knowledge. We evaluate the proposed method on the BrainWeb, OASIS-1 and CANDI datasets. The experimental results show that DE-ResUnet achieves segmentation accuracy superior to that of several state-of-the-art methods. We also evaluate DE-ResUnet on the BraTS 2020 dataset and obtain good segmentation results. These experiments demonstrate that DE-ResUnet can not only segment normal brain MR images accurately but also locate lesion areas in abnormal images. Our code is freely available at https://github.com/LiangWUSDU/DE-ResUnet.
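
The repository linked above contains the authors' implementation; for orientation only, the sketch below shows one common way to realize channel attention and dual-encoder feature fusion in PyTorch. The class names, reduction ratio, and concatenation-based fusion are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (one common CAM variant)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global spatial squeeze
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)  # per-channel weights
        return x * w  # re-weight feature maps channel-wise


class DualEncoderFusion(nn.Module):
    """Toy fusion of features from an intensity (T1) encoder and a texture encoder."""

    def __init__(self, channels: int):
        super().__init__()
        self.attn = ChannelAttention(2 * channels)
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, f_t1: torch.Tensor, f_tex: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([f_t1, f_tex], dim=1)  # concatenate the two encoder features
        return self.proj(self.attn(fused))       # attend over channels, then reduce


# Example usage on dummy feature maps:
# f_t1, f_tex = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
# out = DualEncoderFusion(64)(f_t1, f_tex)  # -> shape (2, 64, 32, 32)
```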
