Abstract

Medical image segmentation is clinically important, as accurate lesion detection supports medical diagnosis and helps physicians plan treatment. The Vision Transformer (ViT) has achieved remarkable results in computer vision and has been applied to image segmentation tasks, but its potential for medical image segmentation, given the special characteristics of medical images, remains largely unexplored. Moreover, ViT based on multi-head self-attention (MSA) flattens the image into a one-dimensional sequence, which destroys its two-dimensional structure. We therefore propose VA-TransUNet, which combines the advantages of Transformers and Convolutional Neural Networks (CNNs) to capture global and local contextual information while also considering features along the channel dimension. A Transformer based on visual attention serves as the encoder, a CNN serves as the decoder, and the image is fed directly into the Transformer. The key to visual attention is large kernel attention (LKA), a depth-wise separable convolution that decomposes a large-kernel convolution into several smaller convolutions. Experiments on the Synapse abdominal multi-organ (Synapse) and Automated Cardiac Diagnosis Challenge (ACDC) datasets demonstrate that the proposed VA-TransUNet outperforms current state-of-the-art networks. The code and trained models will be publicly available at https://github.com/BeautySilly/VA-TransUNet.
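The efficiency motivation behind the LKA decomposition can be sketched with a simple parameter-count comparison. The split below (a 5×5 depth-wise convolution, a 7×7 depth-wise convolution with dilation 3, and a 1×1 point-wise convolution approximating a 21×21 receptive field) follows the Visual Attention Network formulation of LKA; the channel count `C = 64` and kernel sizes are illustrative assumptions, not values taken from this paper.

```python
# Illustrative sketch (not the paper's released code): parameter counts for a
# dense large-kernel convolution versus the LKA-style decomposition into a
# depth-wise conv, a depth-wise dilated conv, and a 1x1 point-wise conv.

def standard_conv_params(channels: int, kernel: int) -> int:
    """Parameters of a dense K x K convolution with equal in/out channels."""
    return channels * channels * kernel * kernel

def lka_params(channels: int, dw_kernel: int = 5, dw_dilated_kernel: int = 7) -> int:
    """Parameters of the assumed LKA decomposition approximating a 21x21
    receptive field: 5x5 depth-wise + 7x7 depth-wise (dilation 3) + 1x1."""
    depth_wise = channels * dw_kernel * dw_kernel            # 5x5, one filter per channel
    depth_wise_dilated = channels * dw_dilated_kernel ** 2   # 7x7 with dilation 3
    point_wise = channels * channels                         # 1x1 conv mixes channels
    return depth_wise + depth_wise_dilated + point_wise

if __name__ == "__main__":
    C = 64  # illustrative channel count
    print(standard_conv_params(C, 21))  # -> 1806336
    print(lka_params(C))                # -> 8832
```

For 64 channels, the decomposition needs roughly 200× fewer parameters than a dense 21×21 convolution while retaining a comparable receptive field, which is what makes large-kernel attention practical in an encoder.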
