Abstract

Semantic segmentation is a long-standing challenge in computer vision. In this paper, we propose a novel method named SegGAN, in which a pre-trained deep semantic segmentation network is embedded into a generative adversarial framework, and the composite networks are jointly fine-tuned end-to-end to produce better segmentation masks. When pre-training the Generative Adversarial Network (GAN), we minimize the loss between the original images and the images generated by the generator from the ground-truth masks. Our motivation is that the trained GAN captures the relationship between the ground-truth masks and the original images, so the masks predicted by the segmentation model should exhibit the same relationship with the original images. Concretely, the GAN is treated as a kind of loss for semantic segmentation to achieve better performance. Extensive experiments on two publicly available datasets demonstrate the effectiveness of the proposed SegGAN.
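
To make the two-stage scheme concrete, below is a minimal PyTorch-style sketch of the training procedure the abstract describes. All module definitions (Generator, Discriminator, SegNet), hyper-parameters, and loss weights are illustrative assumptions, not the paper's actual architecture; the sketch only shows the structure of the losses: a reconstruction-plus-adversarial objective for GAN pre-training, then the mask-to-image GAN reused as an extra consistency loss while the composite networks are fine-tuned end-to-end.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps a (one-hot) segmentation mask back to an RGB image.
    Toy architecture; the paper's actual generator is not specified here."""
    def __init__(self, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_classes, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, mask):
        return self.net(mask)

class Discriminator(nn.Module):
    """Scores whether an image looks real (PatchGAN-style toy example)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, padding=1),
        )
    def forward(self, img):
        return self.net(img)

class SegNet(nn.Module):
    """Placeholder for the pre-trained deep segmentation network."""
    def __init__(self, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, n_classes, 3, padding=1),
        )
    def forward(self, img):
        return self.net(img)

def pretrain_gan_step(G, D, opt_g, opt_d, gt_mask, image):
    """Stage 1: learn mask -> image so the GAN captures the
    relationship between ground-truth masks and original images."""
    fake = G(gt_mask)
    # Discriminator update: real images vs. images generated from masks.
    real_logits = D(image)
    fake_logits = D(fake.detach())
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator update: fool D and reconstruct the original image.
    fake_logits = D(fake)
    g_loss = (F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
              + F.l1_loss(fake, image))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

def finetune_step(S, G, D, opt, image, gt_idx, lam=1.0):
    """Stage 2: joint end-to-end fine-tuning. The GAN acts as an extra
    loss: images reconstructed from *predicted* masks should match the
    originals and look real, so predicted masks follow the same
    mask/image relationship the GAN learned from ground truth."""
    logits = S(image)                          # per-pixel class scores
    ce = F.cross_entropy(logits, gt_idx)       # standard segmentation loss
    soft_mask = F.softmax(logits, dim=1)       # differentiable stand-in for the mask
    recon = G(soft_mask)
    fake_logits = D(recon)
    gan_loss = (F.l1_loss(recon, image)
                + F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits)))
    loss = ce + lam * gan_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

A toy run on random tensors (shapes and class count are arbitrary):

if __name__ == "__main__":
    n_classes = 21
    G, D, S = Generator(n_classes), Discriminator(), SegNet(n_classes)
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    # Joint fine-tuning updates the segmentation network and generator together.
    opt_s = torch.optim.Adam(list(S.parameters()) + list(G.parameters()), lr=1e-4)
    image = torch.randn(2, 3, 64, 64)
    gt_idx = torch.randint(0, n_classes, (2, 64, 64))
    gt_mask = F.one_hot(gt_idx, n_classes).permute(0, 3, 1, 2).float()
    pretrain_gan_step(G, D, opt_g, opt_d, gt_mask, image)   # Stage 1
    finetune_step(S, G, D, opt_s, image, gt_idx)            # Stage 2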
