Abstract

Deep learning architectures based on convolutional neural networks (CNNs) and Transformers have achieved great success in medical image segmentation. Models built on the encoder–decoder framework, such as U-Net, have been successfully employed in many real-world scenarios. However, due to the low contrast between object and background, the varied shapes and scales of objects, and the complex backgrounds in medical images, it is difficult to locate targets and extract effective information from the images, which limits segmentation performance. In this paper, an encoder–decoder architecture based on Transformer-built spatial and channel attention modules is proposed for medical image segmentation. Concretely, Transformer-based spatial and channel attention modules are used to extract complementary global spatial and channel information at different layers of a U-shaped network, which helps learn detailed features at different scales. To better fuse the spatial and channel information from the Transformer features, a spatial and channel feature fusion block is designed for the decoder. The proposed network inherits the advantages of both CNNs and Transformers, combining local feature representation with long-range dependency modeling for medical images. Qualitative and quantitative experiments demonstrate that the proposed method outperforms eight state-of-the-art segmentation methods on five public medical image datasets spanning different modalities, achieving, for example, Dice scores of 80.23% and 93.56% and Intersection over Union (IoU) scores of 67.13% and 88.94% on the Multi-organ Nucleus Segmentation (MoNuSeg) and Combined Healthy Abdominal Organ Segmentation with Computed Tomography scans (CHAOS-CT) datasets, respectively.
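The abstract does not specify the exact module designs, so the following is only a minimal PyTorch sketch of the general idea it describes: self-attention over pixel tokens for spatial context, self-attention over channel tokens for channel context, and a concatenate-then-1x1-convolution block to fuse the two branches in the decoder. All class names, shapes, and hyperparameters here are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SpatialTransformerAttention(nn.Module):
    """Hypothetical sketch: multi-head self-attention over the H*W spatial
    positions, so every pixel can attend to every other pixel."""
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)    # (B, H*W, C): one token per pixel
        out, _ = self.attn(tokens, tokens, tokens)
        out = self.norm(out + tokens)            # residual connection + norm
        return out.transpose(1, 2).reshape(b, c, h, w)

class ChannelTransformerAttention(nn.Module):
    """Hypothetical sketch: self-attention over channels, treating each
    channel's flattened feature map as one token."""
    def __init__(self, spatial_dim, num_heads=1):
        super().__init__()
        self.attn = nn.MultiheadAttention(spatial_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(spatial_dim)

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2)                    # (B, C, H*W): one token per channel
        out, _ = self.attn(tokens, tokens, tokens)
        out = self.norm(out + tokens)
        return out.reshape(b, c, h, w)

class SpatialChannelFusion(nn.Module):
    """Hypothetical fusion block: concatenate the two attended feature maps
    along channels, then mix them with a 1x1 convolution."""
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, spatial_feat, channel_feat):
        return self.fuse(torch.cat([spatial_feat, channel_feat], dim=1))

if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 32)               # a toy encoder feature map
    sa = SpatialTransformerAttention(channels=64)
    ca = ChannelTransformerAttention(spatial_dim=32 * 32)
    fused = SpatialChannelFusion(channels=64)(sa(x), ca(x))
    print(fused.shape)                           # torch.Size([2, 64, 32, 32])
```

In a U-shaped network such as the one the abstract describes, blocks like these would be inserted at several encoder–decoder stages, with the fusion output feeding the next decoder layer.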
