Abstract

With the rise of deep learning, U-Net, built on a U-shaped encoder–decoder architecture with skip connections, has found widespread application in medical image segmentation tasks. However, the receptive field of the standard convolution operation is limited, making it difficult to capture global and long-range semantic interactions. Inspired by the advantages of ConvNeXt and Neighborhood Attention (NA), we propose CAT-Unet in this study to address these challenges. We reduce the number of parameters by using large kernels and depthwise separable convolutions. Meanwhile, we introduce a Coordinate Attention (CA) module, which enables the model to learn richer contextual information from surrounding regions. Furthermore, we adopt Skip-NAT (Neighborhood Attention Transformer) as the main algorithmic framework, replacing U-Net's original skip-connection layers, to lessen the impact of shallow features on network efficiency. Experimental results show that CAT-Unet achieves superior segmentation results on three public datasets. On the ISIC2018 dataset, the best Dice (Dice coefficient), IoU (Intersection over Union), and HD (Hausdorff distance) results are 90.26%, 83.58%, and 4.259, respectively. On the PH2 dataset, the best Dice, IoU, and HD results are 96.49%, 91.81%, and 3.971, respectively. Finally, on the DSB2018 dataset, the best Dice, IoU, and HD results are 94.58%, 88.78%, and 3.749, respectively.
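To make the abstract's building blocks concrete, the sketch below shows a large-kernel depthwise separable convolution block (in the ConvNeXt style) followed by a Coordinate Attention gate in PyTorch. This is a minimal illustration under assumed hyperparameters; the class names (LargeKernelDWBlock, CoordinateAttention), kernel size, and reduction ratio are our own choices for exposition, not the authors' released implementation of CAT-Unet.

```python
# Illustrative sketch only: a ConvNeXt-style large-kernel depthwise block and a
# Coordinate Attention gate, two of the components named in the abstract.
# Hyperparameters and names are assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn


class LargeKernelDWBlock(nn.Module):
    """Depthwise large-kernel conv + pointwise expand/project, with a residual."""

    def __init__(self, channels: int, kernel_size: int = 7, expansion: int = 4):
        super().__init__()
        self.dwconv = nn.Conv2d(channels, channels, kernel_size,
                                padding=kernel_size // 2, groups=channels)  # depthwise
        self.norm = nn.BatchNorm2d(channels)
        self.pwconv1 = nn.Conv2d(channels, channels * expansion, 1)  # pointwise expand
        self.act = nn.GELU()
        self.pwconv2 = nn.Conv2d(channels * expansion, channels, 1)  # pointwise project

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.norm(self.dwconv(x))
        x = self.pwconv2(self.act(self.pwconv1(x)))
        return x + residual


class CoordinateAttention(nn.Module):
    """Coordinate Attention: pools separately along height and width so the
    resulting attention maps keep positional information in both directions."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.norm = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                           # (B, C, H, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)       # (B, C, W, 1)
        y = self.act(self.norm(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                       # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))   # (B, C, 1, W)
        return x * a_h * a_w                                        # direction-aware gating


if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)
    out = CoordinateAttention(64)(LargeKernelDWBlock(64)(feat))
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```

The neighborhood-attention skip connections (Skip-NAT) described in the abstract are not sketched here, since they rely on the Neighborhood Attention Transformer's windowed attention kernels rather than plain convolutions.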
