Abstract
Semantic segmentation is a long-standing challenge in computer vision. In this paper, we propose a novel method named SegGAN, in which a pre-trained deep semantic segmentation network is embedded into a generative adversarial framework to compute better segmentation masks. The combined networks are jointly fine-tuned end-to-end. When pre-training the Generative Adversarial Network (GAN), we minimize the loss between the original images and the images generated by the generator from the ground-truth masks. Our motivation is that the learned GAN captures the relationship between the ground-truth masks and the original images, so the masks predicted by the segmentation model should exhibit the same relationship to the original images. Concretely, the GAN is treated as a loss for semantic segmentation to achieve better performance. Extensive experiments on two publicly available datasets demonstrate the effectiveness of the proposed SegGAN.
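The abstract's core idea, using a pre-trained mask-to-image generator as an extra loss on predicted masks, can be sketched as follows. This is a minimal NumPy toy, not the paper's implementation: the "generator" is a stand-in fixed linear map, the segmentation output is simulated, and the weighting factor `lam` is an assumed hyperparameter, not a value from the paper.

```python
# Toy sketch (hypothetical) of a GAN-derived training signal for segmentation:
# a frozen generator G maps masks to images; a predicted mask is scored by how
# well G reconstructs the original image from it, and that reconstruction error
# is added to the usual per-pixel segmentation loss.
import numpy as np

rng = np.random.default_rng(0)

def generator(mask, W):
    # Stand-in for a pre-trained mask-to-image generator:
    # a fixed linear map from a flattened mask to a flattened image.
    return W @ mask

def l1(a, b):
    # Mean absolute error between two flattened arrays.
    return float(np.abs(a - b).mean())

# Hypothetical tiny problem: 16-pixel images and masks.
W = rng.normal(size=(16, 16))                    # frozen "generator" weights
image = generator(rng.random(16), W)             # "original" image (toy data)
gt_mask = rng.random(16)                         # ground-truth mask
pred_mask = gt_mask + 0.1 * rng.normal(size=16)  # simulated segmentation output

seg_loss = l1(pred_mask, gt_mask)                # standard per-pixel loss
gan_loss = l1(generator(pred_mask, W), image)    # GAN-based consistency loss
lam = 0.5                                        # assumed weighting factor
total_loss = seg_loss + lam * gan_loss
print(total_loss)
```

In the actual method the generator and the segmentation network are deep networks fine-tuned jointly end-to-end; the sketch only illustrates how the GAN term is combined with the segmentation loss.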