Abstract

Joint distribution matching based on generative adversarial nets (GANs) is an effective method to alleviate the insufficient diversity of labeled samples in semisupervised learning. In fact, in addition to the existing samples, generated samples and their corresponding predictive labels must be taken into account to further increase the diversity of labeled samples and the controllability of generated samples. However, existing works have not considered this. Therefore, a semisupervised learning model with adversarial training among joint distributions is proposed. The model consists of a generator, a classifier, and three discriminators incorporated with four joint distributions of samples and labels. The theoretical analysis indicates that, when the model reaches equilibrium, the classifier is exactly the inference network of the generator. Hence, the controllability of the generator and the generalization ability of the classifier are mutually improved. In semisupervised classification experiments, our model achieved state-of-the-art error rates of 0.59%, 16.45%, and 4.86% on the MNIST, CIFAR10, and SVHN datasets, respectively. When only 20 labels are available on the MNIST dataset, the error rate dropped from the previous best of 4% to 1.09%, which indicates that the model is extremely robust to the number of labels. Meanwhile, the model also shows competitiveness in semisupervised generation.
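The abstract does not specify how the four joint distributions are paired with the three discriminators, so the following is only a hypothetical NumPy sketch of one plausible arrangement: the labeled-data joint (x, y) serves as the real distribution, and each discriminator scores one model-induced joint against it. The generator `G`, classifier `C`, and all pairings here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 1-D samples with binary labels.
x_l = rng.normal(size=(64, 1))                    # labeled samples
y_l = rng.integers(0, 2, size=(64, 1)).astype(float)
x_u = rng.normal(size=(64, 1))                    # unlabeled samples
z = rng.normal(size=(64, 1))                      # generator noise

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Hypothetical generator G(z, y) and classifier C(x): random linear maps.
Wg = rng.normal(size=(2, 1))
Wc = rng.normal(size=(1, 1))

def G(z, y):
    return np.hstack([z, y]) @ Wg                 # conditional sample

def C(x):
    return sigmoid(x @ Wc)                        # predictive label P(y=1 | x)

# Four joint distributions over (sample, label) pairs (assumed pairing):
joint_real = np.hstack([x_l, y_l])                # (x, y): labeled data
joint_gen = np.hstack([G(z, y_l), y_l])           # (G(z, y), y): generated sample, given label
joint_cls = np.hstack([x_u, C(x_u)])              # (x, C(x)): real sample, predicted label
joint_gc = np.hstack([G(z, y_l), C(G(z, y_l))])   # (G(z, y), C(G(z, y)))

# Three discriminators, each matching one model joint to the data joint.
Ds = [rng.normal(size=(2, 1)) for _ in range(3)]

def d_loss(W, real, fake):
    # Standard GAN discriminator loss on (sample, label) pairs.
    return -(np.mean(np.log(sigmoid(real @ W) + 1e-8))
             + np.mean(np.log(1.0 - sigmoid(fake @ W) + 1e-8)))

losses = [d_loss(W, fake_joint, joint_real)
          for W, fake_joint in zip(Ds, (joint_gen, joint_cls, joint_gc))]
losses = [d_loss(W, joint_real, fake_joint)
          for W, fake_joint in zip(Ds, (joint_gen, joint_cls, joint_gc))]
print(losses)
```

At equilibrium of such a game, all three model joints would coincide with the data joint, which is the sense in which the classifier becomes the inference network of the generator.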
