Abstract

U-Net has demonstrated strong performance in medical image segmentation and has been adapted into numerous variants for a wide range of applications. However, these variants primarily focus on enhancing the model's feature extraction capabilities, often at the cost of increased parameters and floating-point operations (FLOPs). In this paper, we propose GA-UNet (Ghost and Attention U-Net), a lightweight U-Net for medical image segmentation. GA-UNet consists mainly of lightweight GhostV2 bottlenecks, which reduce redundant information, and Convolutional Block Attention Modules (CBAM), which capture key features. We evaluate our model on four datasets: CVC-ClinicDB, the 2018 Data Science Bowl, ISIC-2018, and BraTS 2018 low-grade gliomas (LGG). Experimental results show that GA-UNet outperforms other state-of-the-art (SOTA) models, achieving an F1-score of 0.934 and a mean Intersection over Union (mIoU) of 0.882 on CVC-ClinicDB, an F1-score of 0.922 and an mIoU of 0.860 on the 2018 Data Science Bowl, an F1-score of 0.896 and an mIoU of 0.825 on ISIC-2018, and an F1-score of 0.896 and an mIoU of 0.853 on BraTS 2018 LGG. In addition, GA-UNet has fewer parameters (2.18M) and lower FLOPs (4.45G) than other SOTA models, which further demonstrates the superiority of our model.
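For readers unfamiliar with the two building blocks named above, the following is a minimal PyTorch sketch of a Ghost-style module (omitting the decoupled fully connected attention branch that distinguishes GhostV2) and a CBAM block. Layer sizes, kernel choices, and placement are illustrative assumptions, not the exact GA-UNet configuration described in the paper.

    # Minimal sketch of a Ghost-style module and a CBAM block (assumed configuration).
    import torch
    import torch.nn as nn

    class GhostModule(nn.Module):
        """Produces half the output channels with a 1x1 conv and the rest with a cheap depthwise conv."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            mid = out_ch // 2
            self.primary = nn.Sequential(
                nn.Conv2d(in_ch, mid, 1, bias=False),
                nn.BatchNorm2d(mid), nn.ReLU(inplace=True))
            self.cheap = nn.Sequential(
                nn.Conv2d(mid, mid, 3, padding=1, groups=mid, bias=False),
                nn.BatchNorm2d(mid), nn.ReLU(inplace=True))

        def forward(self, x):
            y = self.primary(x)
            return torch.cat([y, self.cheap(y)], dim=1)

    class CBAM(nn.Module):
        """Channel attention followed by spatial attention, as in the original CBAM formulation."""
        def __init__(self, ch, reduction=16):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
                nn.Conv2d(ch // reduction, ch, 1))
            self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

        def forward(self, x):
            # Channel attention from average- and max-pooled descriptors.
            ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True)) +
                               self.mlp(x.amax((2, 3), keepdim=True)))
            x = x * ca
            # Spatial attention from channel-wise mean and max maps.
            sa = torch.sigmoid(self.spatial(torch.cat(
                [x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
            return x * sa

In this sketch the Ghost module halves the number of expensive convolutions, which is the source of the parameter and FLOP savings, while CBAM reweights the resulting feature maps along the channel and spatial dimensions.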
