Abstract

Since generative adversarial networks (GANs) can learn a data distribution and generate new samples from it, they have become a research hotspot in deep learning and cognitive computation. However, training a GAN depends heavily on a large set of training data, which is difficult to acquire in many real-world applications. In this paper, we propose a novel generative adversarial network, ML-CGAN, for generating authentic and diverse images from few training data. ML-CGAN consists of two modules: a conditional generative adversarial network (CGAN) backbone and a meta-learner structure. The CGAN backbone generates the images, while the meta-learner structure is an auxiliary network that provides deconvolutional weights for the generator of the CGAN backbone. Qualitative and quantitative experimental results on the MNIST, Fashion-MNIST, CelebA, and CIFAR-10 data sets demonstrate the superiority of ML-CGAN over state-of-the-art models. In particular, the results show that the meta-learner structure can learn prior knowledge and transfer it to new tasks, which is beneficial for generating authentic and diverse images on new tasks with few training data.
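The division of labor described above, in which an auxiliary meta-learner supplies deconvolutional (transposed-convolution) weights that the generator then applies, can be sketched minimally as follows. This is an illustrative assumption, not the paper's implementation: `meta_learner` here is a fixed random projection standing in for the learned auxiliary network, and the task embedding, kernel size, and feature-map shapes are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def meta_learner(task_embedding, k=4):
    # Hypothetical stand-in for the meta-learner structure: maps a task
    # embedding to a k-by-k deconvolution kernel for the generator.
    # (The paper's meta-learner is a trained network; this fixed random
    # projection only illustrates the data flow.)
    proj = rng.standard_normal((task_embedding.size, k * k))
    return (task_embedding @ proj).reshape(k, k)

def deconv2d(x, w, stride=2):
    # Single-channel transposed convolution: each input pixel, scaled by
    # the kernel, is scattered onto a stride-upsampled output grid.
    h, wd = x.shape
    k = w.shape[0]
    out = np.zeros(((h - 1) * stride + k, (wd - 1) * stride + k))
    for i in range(h):
        for j in range(wd):
            out[i * stride:i * stride + k,
                j * stride:j * stride + k] += x[i, j] * w
    return out

task = rng.standard_normal(8)    # embedding of the new (few-shot) task
w = meta_learner(task)           # meta-learner supplies the deconv weights
z = rng.standard_normal((7, 7))  # generator feature map
img = deconv2d(z, w)             # 7x7 feature map upsampled to 16x16
print(img.shape)
```

The key point the sketch conveys is that the generator's upsampling weights are not free parameters of the backbone but are produced per task by the auxiliary network, which is how prior knowledge can be transferred to a new task with few training data.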
