Abstract

Semantic segmentation of colon glands is notoriously challenging due to their complex texture, large variation, and the scarcity of training data with accurate annotations. The task is difficult even for experts, let alone for computer-aided diagnosis systems. Recently, methods based on deep convolutional neural networks (DCNNs) have been introduced to tackle this problem, achieving impressive performance. However, these methods tend to miss important regions of the colon gland or to make incorrect segmentation decisions. In this paper, we address this challenging problem by proposing a novel framework based on a conditional generative adversarial network. First, the generator in the framework is trained to learn a mapping from a colon gland image to a confidence map indicating, for each pixel, the probability of belonging to a gland object. The discriminator is responsible for penalizing mismatches between the colon gland image and the confidence map. This additional adversarial learning encourages the generator to produce higher-quality confidence maps. We then transform the confidence map into a binary image using a fixed threshold to complete the segmentation task. We conduct extensive experiments on the public MICCAI 2015 gland segmentation benchmark to verify the effectiveness of the proposed method. The results demonstrate that our method achieves better segmentation than competing methods, both in terms of visual perception and on two quantitative metrics.
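The final step described above, turning the generator's per-pixel confidence map into a binary segmentation with a fixed threshold, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the threshold value of 0.5 and the function name are assumptions, since the abstract does not specify them.

```python
import numpy as np

def confidence_to_mask(confidence: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarize a per-pixel confidence map into a gland segmentation mask.

    Pixels with confidence >= threshold are labeled 1 (gland), others 0.
    The 0.5 default is an illustrative choice, not taken from the paper.
    """
    return (confidence >= threshold).astype(np.uint8)

# Toy 2x3 confidence map (values are illustrative only).
conf = np.array([[0.9, 0.2, 0.7],
                 [0.4, 0.8, 0.1]])
mask = confidence_to_mask(conf)
# mask is [[1, 0, 1], [0, 1, 0]]
```

In practice the confidence map would be the generator's sigmoid output over the whole image, and the same thresholding applies unchanged.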

