Abstract
Most state-of-the-art deep networks proposed for biomedical image segmentation are developed based on U-Net. While remarkable success has been achieved, its inherent limitations hinder it from yielding more precise segmentation. First, its receptive field is limited due to the fixed kernel size, which prevents the network from modeling global context information. Second, when spatial information captured by shallower layers is directly transmitted to higher layers by skip connections, the process inevitably introduces noise and irrelevant information into the feature maps and blurs their semantic meaning. In this article, we propose a novel segmentation network, the context prior guidance network (CPG-Net), equipped with a new context prior guidance (CPG) module to overcome these limitations for biomedical image segmentation. Specifically, we first extract a set of context priors under the supervision of a coarse segmentation and then employ these context priors to model the global context information and bridge the spatial-semantic gap between high-level and low-level features. The CPG module contains two major components: context prior representation (CPR) and semantic complement flow (SCF). CPR is used to identify pixels belonging to the same objects and hence produces more discriminative features for distinguishing different objects. We further introduce deep semantic information into each CPR through the SCF mechanism to compensate for the semantic information diluted during decoding. We extensively evaluate the proposed CPG-Net on three well-known biomedical image segmentation tasks with diverse imaging modalities and semantic environments. Experimental results demonstrate the effectiveness of our network, which consistently outperforms state-of-the-art segmentation networks on all three tasks. Code is available at https://github.com/zzw-szu/CPGNet.
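For readers who want a concrete picture of what a "context prior" can look like, the following is a minimal PyTorch-style sketch, not the authors' implementation: class and attribute names such as `ContextPriorSketch` and `prior_head` are hypothetical, and the supervision by a coarse segmentation described in the abstract is only noted in comments. It illustrates the general idea of predicting a pixel-to-pixel affinity map and using it to aggregate intra-class context.

```python
# Minimal sketch of a context-prior-style module (hypothetical names;
# not the CPG-Net implementation). A pairwise affinity ("context prior")
# map is predicted from the features and used to aggregate context from
# pixels likely belonging to the same object.

import torch
import torch.nn as nn


class ContextPriorSketch(nn.Module):
    def __init__(self, in_channels: int, prior_channels: int = 64):
        super().__init__()
        # Project input features before computing the affinity map.
        self.embed = nn.Conv2d(in_channels, prior_channels, kernel_size=1)
        # Produce the "key" features for the N x N affinity map (N = H * W).
        # In the paper this prior map is supervised by a coarse segmentation;
        # only the forward pass is shown here.
        self.prior_head = nn.Conv2d(prior_channels, prior_channels,
                                    kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        feat = self.embed(x)            # (B, C', H, W)
        key = self.prior_head(feat)     # (B, C', H, W)
        feat_flat = feat.flatten(2)     # (B, C', N)
        key_flat = key.flatten(2)       # (B, C', N)
        # Affinity between every pair of positions; sigmoid keeps it in [0, 1],
        # interpretable as "do these two pixels belong to the same object?"
        prior = torch.sigmoid(
            torch.bmm(feat_flat.transpose(1, 2), key_flat))   # (B, N, N)
        # Aggregate intra-class context with the prior, then reshape back.
        context = torch.bmm(feat_flat, prior) / (h * w)        # (B, C', N)
        return context.view(b, -1, h, w)


if __name__ == "__main__":
    x = torch.randn(2, 256, 32, 32)
    out = ContextPriorSketch(256)(x)
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```

The aggregated context tensor would then be fused with decoder features (e.g., by concatenation or addition) so that low-level skip features are filtered by object-level affinity rather than passed through unchanged; the exact fusion and the SCF mechanism follow the released code at the repository above.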