Abstract

Accurate segmentation of medical images is vital for disease detection and treatment. Convolutional Neural Networks (CNNs) and Transformer models are widely used in medical image segmentation owing to their strong image-recognition capabilities. However, CNNs often lack an understanding of global context and may lose spatial details of the target, while Transformers struggle to process local information, degrading the geometric detail of the target. To address these issues, this paper presents a Global-Local Fusion network model (GLFUnet) based on the U-Net framework and attention mechanisms. The model employs a dual-branch network that uses ConvNeXt and a Swin Transformer to simultaneously extract multi-level features from pathological images. It enhances ConvNeXt’s local feature extraction with spatial- and global-attention up-sampling modules, and improves the Swin Transformer’s global-context modeling with channel attention. An Attention Feature Fusion module and skip connections efficiently merge the locally detailed and globally coarse features from the CNN and Transformer branches at multiple scales. The fused features are then progressively restored to the original image resolution for pixel-level prediction. Comprehensive experiments on stomach and liver cancer datasets demonstrate GLFUnet’s superior performance and adaptability in medical image segmentation, holding promise for clinical analysis and disease diagnosis.
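The abstract does not include the authors’ implementation, so the following is only a minimal sketch of the dual-branch fusion idea in PyTorch: same-scale feature maps from a CNN branch and a Transformer branch are concatenated and re-weighted by a channel-attention gate before projection. The module name `AttentionFeatureFusion`, the reduction ratio, and the tensor shapes are illustrative assumptions, not the paper’s actual module.

```python
import torch
import torch.nn as nn


class AttentionFeatureFusion(nn.Module):
    """Hypothetical sketch: fuse a local (CNN) and a global (Transformer)
    feature map of the same scale via a learned channel-attention gate."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # global context per channel
            nn.Conv2d(2 * channels, 2 * channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels // reduction, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, local_feat: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
        x = torch.cat([local_feat, global_feat], dim=1)   # (B, 2C, H, W)
        x = x * self.gate(x)                              # channel re-weighting
        return self.proj(x)                               # back to (B, C, H, W)


# Usage: fuse same-scale features from the two branches (shapes are assumed).
cnn_feat = torch.randn(1, 96, 56, 56)    # e.g. a ConvNeXt stage output
swin_feat = torch.randn(1, 96, 56, 56)   # e.g. a Swin Transformer stage output
fused = AttentionFeatureFusion(96)(cnn_feat, swin_feat)
print(fused.shape)  # torch.Size([1, 96, 56, 56])
```

In GLFUnet such fused features would feed the decoder through skip connections at each scale; this sketch shows only the channel-attention gating step, not the spatial-attention or up-sampling modules the abstract also mentions.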
