Abstract

Medical image segmentation has been widely studied, and many methods have been proposed. Among them, U-Net and its variants have achieved promising performance. However, these methods miss certain regions because each encoder layer generates only fixed-scale receptive fields, and they cannot establish rich contextual dependencies over the fused features in the decoder. To address these problems, this paper proposes a multi-scale contextual dual attention learning network (MCDALNet) that captures multi-scale information together with the dependencies among spatial and channel features. MCDALNet contains two components: an encoder with three multi-scale contextual learning (MCL) modules and a decoder with three dual attention modules. The MCL module extracts multi-scale contextual information from low-level features through a split-transform-merge-residual architecture. The dual attention module consists of a position attention sub-module and a channel attention sub-module, which improve the feature representation for medical image segmentation. The position attention sub-module captures spatial dependencies by learning similar spatial features, and the channel attention sub-module captures channel dependencies by learning related features across channel maps. Experimental results show that our approach achieves significant improvements in medical image segmentation and outperforms representative deep learning models on public datasets.
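The abstract gives no implementation details, but the two components it names follow well-known design patterns. The Python/PyTorch sketch below is a minimal, hypothetical rendering of them, assuming a Res2Net-style split-transform-merge-residual block for the MCL module and DANet-style position/channel self-attention for the dual attention module; all class names, channel counts, and the number of splits are illustrative assumptions, not the authors' code.

# Illustrative sketch only; the paper's actual layer configuration is not
# given in the abstract, so every hyperparameter here is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MCLModule(nn.Module):
    """Multi-scale contextual learning: split-transform-merge-residual."""

    def __init__(self, channels: int, splits: int = 4):
        super().__init__()
        assert channels % splits == 0
        self.splits = splits
        width = channels // splits
        # One 3x3 conv per split except the first (identity branch); feeding
        # each conv the previous branch's output progressively enlarges the
        # receptive field, yielding multi-scale context in a single module.
        self.convs = nn.ModuleList(
            nn.Conv2d(width, width, kernel_size=3, padding=1)
            for _ in range(splits - 1)
        )
        self.merge = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        chunks = torch.chunk(x, self.splits, dim=1)          # split
        outs, prev = [chunks[0]], chunks[0]
        for conv, chunk in zip(self.convs, chunks[1:]):      # transform
            prev = conv(chunk + prev)                        # hierarchical fusion
            outs.append(prev)
        merged = self.merge(torch.cat(outs, dim=1))          # merge
        return F.relu(merged + x)                            # residual

class PositionAttention(nn.Module):
    """Spatial self-attention: every position attends to all others."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C//8)
        k = self.key(x).flatten(2)                     # (B, C//8, HW)
        attn = torch.softmax(q @ k, dim=-1)            # (B, HW, HW) affinities
        v = self.value(x).flatten(2)                   # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

class ChannelAttention(nn.Module):
    """Channel self-attention: every channel map attends to all others."""

    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        flat = x.flatten(2)                                        # (B, C, HW)
        attn = torch.softmax(flat @ flat.transpose(1, 2), dim=-1)  # (B, C, C)
        return self.gamma * (attn @ flat).view(b, c, h, w) + x

class DualAttention(nn.Module):
    """Sum of position- and channel-attention refined features."""

    def __init__(self, channels: int):
        super().__init__()
        self.position = PositionAttention(channels)
        self.channel = ChannelAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.position(x) + self.channel(x)

# Example: refine a 64-channel feature map (shapes are preserved).
# x = torch.randn(2, 64, 32, 32)
# y = DualAttention(64)(MCLModule(64)(x))   # -> (2, 64, 32, 32)

One design note on this sketch: the learnable gamma weights are initialized to zero, so each attention branch starts as an identity mapping and the network gradually learns how much non-local spatial or channel context to mix into the features.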
