Abstract

Background and Objective: Segmentation of colorectal cancer histopathology images is a core task in computer-aided medical diagnosis systems. Existing convolutional neural networks generally extract multi-scale information by inserting multi-branch modules into a linear information flow, which makes it difficult to extract heterogeneous semantic information across multiple levels and receptive fields, and to establish contextual dependencies among features from different receptive fields.

Methods: To address these issues, we propose a symmetric, spiral, progressive feature-fusion encoder-decoder network called the Symmetric Conical Network (SC-Net). First, we design a Multi-scale Feature Extraction Block (MFEB), tailored to the SC-Net, that captures multi-branch heterogeneous semantic information under different receptive fields and thereby enriches the diversity of the extracted features. The encoder is built from MFEBs arranged in a spiral, multi-branch layout to strengthen contextual dependence between different information flows. Second, because stacking MFEBs in cascade causes high-level semantic features to lose contour, color, and other low-level information, we design a Feature Mapping Layer (FML) that maps low-level features to high-level semantic features along the down-sampling branch, addressing the insufficient global feature extraction at deep levels.

Results: SC-Net was evaluated on our self-constructed colorectal cancer dataset, a publicly available breast cancer dataset, and a polyp dataset, achieving mDice scores of 0.8611, 0.7259, and 0.7144, respectively. We compare our model with state-of-the-art semantic segmentation networks, including UNet++, PSPNet, Attention U-Net, and R2U-Net; the experimental results show that SC-Net achieves the best performance.

Conclusions: The results indicate that the proposed SC-Net excels in segmenting H&E-stained pathology images, effectively preserving morphological features and spatial information even in scenarios with weak texture, poor contrast, and variations in appearance.
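
The abstract does not detail the internals of the MFEB, so the following is only a minimal sketch of the general idea of a multi-branch block with different receptive fields; the branch count, dilation rates, channel split, and concatenation-based fusion are assumptions for illustration, not the paper's actual MFEB design.

```python
import torch
import torch.nn as nn


class MultiScaleBlock(nn.Module):
    """Illustrative multi-branch block: parallel 3x3 convolutions with
    different dilation rates give each branch a different effective
    receptive field; outputs are concatenated and fused by a 1x1 conv.
    All hyperparameters here are assumptions, not the paper's MFEB."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        branch_ch = out_ch // 4
        # Four parallel branches with increasing dilation (assumed rates).
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True),
            )
            for d in (1, 2, 4, 8)
        ])
        # 1x1 convolution to fuse the concatenated branch outputs.
        self.fuse = nn.Conv2d(branch_ch * 4, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))


if __name__ == "__main__":
    block = MultiScaleBlock(in_ch=64, out_ch=128)
    x = torch.randn(1, 64, 128, 128)  # e.g. a feature map from an H&E image patch
    print(block(x).shape)             # torch.Size([1, 128, 128, 128])
```

A block of this shape is a common way to expose several receptive fields at one depth without branching the overall network topology; how SC-Net actually arranges such blocks spirally and couples them with the FML is described in the full paper.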
