Abstract

Efficient and accurate medical image segmentation is essential for pathological evaluation and disease diagnosis in clinical practice. In recent years, the U-shaped encoder–decoder structure has achieved strong performance on various medical image segmentation tasks. Shaped like the letter U, this neural network progressively reduces the spatial dimension of the feature map to capture high-level semantic information during encoding, and restores target detail information during decoding to produce the final segmentation results. However, because of the rigidity of this fixed framework, the U-shaped encoder–decoder structure still suffers from semantic asymmetry and global semantic dilution, problems that are exacerbated during decoding. In this paper, we propose a cross-level collaborative context-aware framework (C3-Net) to address these issues in medical image segmentation. The main contributions of this research are: (i) to address the inherent problems of the U-shaped structure, we propose C3-Net to effectively exploit the differences between cross-level contextual information; (ii) to preserve detail information and enhance semantic information, we present a Channel Scale-aware Context Enhancement (CSCE) module that enhances low-level contextual features at both the global and local scales of the channel dimension; (iii) we design a Spatial Pyramid Context Alignment (SPCA) module to extract and align pyramid features, yielding accurate global contextual features at the top of the network; (iv) we propose a Cross-Level Collaborative Context Refinement (CCCR) module that enables efficient collaboration between cross-level contextual features, achieving semantic alignment and global semantic enhancement simultaneously.
Experiments on three publicly available datasets (Synapse, ACDC, and GlaS) and one private dataset (VMICH) show that the proposed method outperforms all competing approaches. C3-Net achieves new state-of-the-art performance on the four segmentation tasks, with Dice scores of 85.26%, 92.10%, 87.20%, and 87.61%, respectively.
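To make the channel-level enhancement idea concrete, the following is a minimal NumPy sketch of an SE-style channel attention that combines a global per-channel descriptor with a local per-position gate, in the spirit of the CSCE module's global and local channel scales. The function name, the weight shapes, and the gating combination are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def channel_scale_aware_enhance(x, w_global, w_local):
    """Hypothetical CSCE-style enhancement of a feature map.

    x        : low-level feature map, shape (C, H, W)
    w_global : (C, C) weights for the global-scale channel gate
    w_local  : (C, C) weights for the local-scale 1x1 channel mixing
    """
    # Global scale: squeeze spatial dims into one descriptor per channel
    # (global average pooling), then gate each channel with a sigmoid.
    g = x.mean(axis=(1, 2))                          # (C,)
    global_gate = 1.0 / (1.0 + np.exp(-(w_global @ g)))  # (C,)

    # Local scale: per-position channel mixing (a 1x1 convolution is a
    # matrix multiply over the channel axis), gated per position.
    local = np.einsum("oc,chw->ohw", w_local, x)     # (C, H, W)
    local_gate = 1.0 / (1.0 + np.exp(-local))        # (C, H, W)

    # Reweight the input by both gates; a residual path keeps detail.
    return x * global_gate[:, None, None] * local_gate + x
```

Both gates lie in (0, 1), so the output stays close to the input while informative channels and positions are amplified; the residual addition preserves the low-level detail the decoder needs.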
