Abstract

We propose Dual Cross-Attention (DCA), a simple yet effective attention module that enhances skip-connections in U-Net-based architectures for medical image segmentation. The plain skip-connection scheme in U-Net-based architectures struggles to capture multi-scale context, resulting in a semantic gap between encoder and decoder features. This semantic gap causes redundancy between low- and high-level features, which ultimately limits segmentation performance. In this paper, we address this issue by sequentially capturing channel and spatial dependencies across multi-scale encoder features, adaptively combining low- and high-level features at various scales to effectively bridge the semantic gap. First, the Channel Cross-Attention (CCA) module extracts global channel-wise dependencies by applying cross-attention across channel tokens of multi-scale encoder features. Then, the Spatial Cross-Attention (SCA) module performs cross-attention to capture spatial dependencies across spatial tokens. Finally, these refined encoder features are up-sampled and connected to their corresponding decoder stages to form the skip-connection scheme. Our proposed DCA module can be integrated into any encoder–decoder architecture with skip-connections, such as U-Net and its variants, as well as advanced architectures based on vision transformers. Experimental results on six medical image segmentation datasets demonstrate that our DCA module consistently improves overall segmentation performance with only a slight increase in parameters. Our code is available at: https://github.com/gorkemcanates/Dual-Cross-Attention.
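To make the two-stage mechanism concrete, the sketch below illustrates the idea in PyTorch: multi-scale encoder maps are pooled to a shared token grid, channel cross-attention (CCA) and then spatial cross-attention (SCA) are applied across the concatenated scales, and the refined maps are up-sampled back for the skip-connections. This is a minimal illustration under assumed hyper-parameters (e.g., an 8x8 token grid, a 64-dimensional query/key projection) and is not the authors' exact implementation; see the linked repository for the reference code.

```python
# Minimal sketch of the Dual Cross-Attention idea (illustrative only; not the
# reference implementation from the linked repository).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelCrossAttention(nn.Module):
    """Cross-attention over channel tokens: queries come from each scale,
    keys/values from the channel-wise concatenation of all scales."""

    def __init__(self, channels_per_scale):
        super().__init__()
        total = sum(channels_per_scale)
        self.norm_q = nn.ModuleList([nn.LayerNorm(c) for c in channels_per_scale])
        self.norm_kv = nn.LayerNorm(total)

    def forward(self, tokens):                           # tokens: list of (B, N, C_i)
        concat = torch.cat(tokens, dim=-1)               # (B, N, sum(C_i))
        kv = self.norm_kv(concat).transpose(1, 2)        # channel tokens: (B, sum(C_i), N)
        outs = []
        for t, norm in zip(tokens, self.norm_q):
            q = norm(t).transpose(1, 2)                  # (B, C_i, N)
            attn = torch.softmax(q @ kv.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
            outs.append(t + (attn @ kv).transpose(1, 2)) # residual, back to (B, N, C_i)
        return outs


class SpatialCrossAttention(nn.Module):
    """Cross-attention over spatial tokens: each scale attends to the spatial
    tokens of the channel-wise concatenation of all scales."""

    def __init__(self, channels_per_scale, dim=64):
        super().__init__()
        total = sum(channels_per_scale)
        self.q = nn.ModuleList([nn.Linear(c, dim) for c in channels_per_scale])
        self.k = nn.Linear(total, dim)
        self.v = nn.ModuleList([nn.Linear(total, c) for c in channels_per_scale])

    def forward(self, tokens):                           # tokens: list of (B, N, C_i)
        concat = torch.cat(tokens, dim=-1)               # (B, N, sum(C_i))
        k = self.k(concat)                               # (B, N, dim)
        outs = []
        for t, q_proj, v_proj in zip(tokens, self.q, self.v):
            q = q_proj(t)                                # (B, N, dim)
            attn = torch.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
            outs.append(t + attn @ v_proj(concat))       # residual, (B, N, C_i)
        return outs


class DCA(nn.Module):
    """Pool multi-scale encoder maps to a common token grid, apply CCA then SCA,
    and up-sample the refined maps back to their original resolutions."""

    def __init__(self, channels_per_scale, patch=8):
        super().__init__()
        self.patch = patch
        self.cca = ChannelCrossAttention(channels_per_scale)
        self.sca = SpatialCrossAttention(channels_per_scale)

    def forward(self, feats):                            # feats: list of (B, C_i, H_i, W_i)
        sizes = [f.shape[-2:] for f in feats]
        tokens = [F.adaptive_avg_pool2d(f, self.patch).flatten(2).transpose(1, 2)
                  for f in feats]                        # each (B, patch*patch, C_i)
        tokens = self.sca(self.cca(tokens))
        outs = []
        for t, f, (h, w) in zip(tokens, feats, sizes):
            m = t.transpose(1, 2).reshape(f.shape[0], f.shape[1], self.patch, self.patch)
            outs.append(f + F.interpolate(m, size=(h, w), mode="bilinear",
                                          align_corners=False))
        return outs                                      # refined skip-connection features


if __name__ == "__main__":
    dca = DCA([32, 64, 128, 256])
    feats = [torch.randn(1, c, 64 // 2 ** i, 64 // 2 ** i)
             for i, c in enumerate([32, 64, 128, 256])]
    print([o.shape for o in dca(feats)])                 # shapes match the encoder features
```

The refined outputs replace the plain encoder features in the skip-connections, so the module can be dropped into a U-Net-style decoder without changing tensor shapes.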
