Most state-of-the-art deep networks for biomedical image segmentation are built on U-Net. While U-Net has achieved remarkable success, two inherent limitations hinder it from yielding more precise segmentations. First, its receptive field is limited by the fixed kernel size, which prevents the network from modeling global context. Second, when spatial information captured by shallower layers is transmitted directly to higher layers through skip connections, the process inevitably introduces noise and irrelevant information into the feature maps and blurs their semantic meaning. In this article, we propose a novel segmentation network, the context prior guidance network (CPG-Net), equipped with a new context prior guidance (CPG) module that overcomes these limitations. Specifically, we first extract a set of context priors under the supervision of a coarse segmentation and then employ these priors to model global context and bridge the spatial-semantic gap between high-level and low-level features. The CPG module contains two major components: context prior representation (CPR) and semantic complement flow (SCF). CPR groups pixels belonging to the same object and hence produces more discriminative features for distinguishing different objects. SCF then injects deep semantic information into each CPR to compensate for the semantic information diluted during decoding. We extensively evaluate CPG-Net on three well-known biomedical image segmentation tasks with diverse imaging modalities and semantic environments. Experimental results demonstrate the effectiveness of our network, which consistently outperforms state-of-the-art segmentation networks on all three tasks. Code is available at https://github.com/zzw-szu/CPGNet .
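To make the context-prior idea concrete, the following is a minimal, hypothetical PyTorch sketch of an affinity-based aggregation of the kind the abstract describes: a learned pixel-to-pixel affinity map (which, as in the paper, could be supervised by a coarse segmentation) is used to pool context from pixels predicted to belong to the same object. All names (`ContextPriorBlock`, `embed`) are illustrative assumptions, not the authors' actual implementation; see the linked repository for the real code.

```python
# Hypothetical sketch of context-prior style aggregation; this is NOT the
# authors' implementation, only an illustration of the general mechanism.
import torch
import torch.nn as nn


class ContextPriorBlock(nn.Module):
    """Aggregate features with a learned pixel-affinity ("context prior") map.

    In a full model, the affinity map could be supervised with a coarse
    segmentation so that pixels of the same object attend to one another.
    """

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv producing the per-pixel embedding used to compute affinities
        self.embed = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        e = self.embed(x).flatten(2)                     # (B, C, HW)
        # Pairwise pixel affinity squashed to (0, 1); values near 1 mark
        # pixel pairs predicted to share an object (intra-class context).
        prior = torch.sigmoid(e.transpose(1, 2) @ e)     # (B, HW, HW)
        # Row-normalize so aggregation averages over the attended pixels.
        prior = prior / prior.sum(dim=-1, keepdim=True).clamp_min(1e-6)
        # Gather context for every pixel and fold back to the spatial grid.
        ctx = prior @ x.flatten(2).transpose(1, 2)       # (B, HW, C)
        ctx = ctx.transpose(1, 2).reshape(b, c, h, w)
        # Fuse the aggregated global context with the local features.
        return x + ctx


if __name__ == "__main__":
    block = ContextPriorBlock(channels=8)
    y = block(torch.randn(2, 8, 16, 16))
    print(tuple(y.shape))  # (2, 8, 16, 16)
```

The residual fusion (`x + ctx`) keeps the local features intact while adding object-level context, which is one common way to combine such priors with skip-connection features.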