Abstract

One-shot semantic segmentation aims to recognize unseen object regions given only a single annotated example as reference. Deep convolutional neural networks have achieved considerable success on this task. However, most existing methods train the network on a fixed annotated dataset, and the remaining unannotated examples are difficult to leverage and recognize. In this study, we propose a procedure based on generative adversarial networks that enables a one-shot semantic segmentation model to learn from previously unknown categories. Our method comprises a segmentation network that generates segmentation predictions and a discriminator that distinguishes the predicted probability maps from the ground-truth distribution. Pixels classified as fake are then ignored, and only the trustworthy regions are used as labels to train the segmentation network, thereby achieving semi-supervised learning. Experimental results demonstrate the effectiveness of the proposed adversarial learning method, with an average gain of 49.7% in accuracy score on the PASCAL VOC 2012 dataset.
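
To make the masking step concrete, below is a minimal sketch of the semi-supervised loss described above, assuming PyTorch; the tensor shapes, threshold value, and function name are illustrative assumptions, not the authors' released code. The discriminator's per-pixel confidence map selects the trustworthy regions of the segmentation prediction, whose argmax then serves as the pseudo-label for training on unannotated images.

import torch
import torch.nn.functional as F

def semi_supervised_loss(seg_logits, disc_confidence, threshold=0.2):
    """Cross-entropy on unannotated images, restricted to pixels the
    discriminator judges as close to the ground-truth distribution.

    seg_logits:      (N, C, H, W) raw segmentation scores
    disc_confidence: (N, 1, H, W) discriminator output in [0, 1]
    threshold:       confidence cutoff (hypothetical value)
    """
    probs = F.softmax(seg_logits, dim=1)
    pseudo_labels = probs.argmax(dim=1)                 # (N, H, W) predicted class per pixel
    trusted = disc_confidence.squeeze(1) > threshold    # boolean mask of "real-looking" pixels

    loss = F.cross_entropy(seg_logits, pseudo_labels, reduction="none")  # per-pixel loss (N, H, W)
    loss = loss[trusted]                                # keep only trustworthy regions
    # If no pixel passes the threshold, contribute zero loss for this batch.
    return loss.mean() if loss.numel() > 0 else seg_logits.sum() * 0.0

# Toy usage with random tensors standing in for network outputs.
seg_logits = torch.randn(2, 21, 64, 64)        # e.g. 21 PASCAL VOC classes
disc_confidence = torch.rand(2, 1, 64, 64)
print(semi_supervised_loss(seg_logits, disc_confidence))

In this sketch, pixels the discriminator classifies as fake simply drop out of the loss, so only regions resembling the ground-truth distribution supervise the segmentation network on unannotated images.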
