Abstract

Inverse mappings of Generative Adversarial Networks (GANs), which project data into the latent space, have recently been introduced, and it has been shown that inverse mapping models trained by bidirectional adversarial learning enable novel and practical operations, including interpolation between real data. However, existing techniques still do not ensure a consistent mapping between data and their latent representations, so the models struggle to converge during training. Our discussion begins with an empirical investigation of this inconsistency issue in prior techniques, and we then propose a novel adversarial learning method, Pseudo Conditional Bidirectional GAN (PC-BiGAN), for training the inverse mapping of GANs with a high degree of consistency and similarity-awareness. Our models are guided by pseudo conditions defined by the proximity relationships among data in an unsupervised learned feature space. We demonstrate that our bidirectional adversarial learning framework improves performance in sample reconstruction, generation, and interpolation.
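
The abstract does not detail how the pseudo conditions are built, but as an illustration, the following is a minimal sketch of one way a proximity-based pseudo condition could be derived, assuming samples are grouped by clustering in an unsupervised learned feature space. The encoder, the number of conditions, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation): derive "pseudo
# conditions" from the proximity relationship among data points in an
# unsupervised learned feature space, here via k-means clustering.

import numpy as np
from sklearn.cluster import KMeans

def pseudo_conditions(features: np.ndarray, n_conditions: int = 50) -> np.ndarray:
    """Assign each sample a pseudo condition (a cluster id) based on
    proximity in feature space. `features` has shape (N, D)."""
    km = KMeans(n_clusters=n_conditions, n_init=10, random_state=0)
    labels = km.fit_predict(features)  # shape (N,), values in [0, n_conditions)
    return labels

# Usage sketch:
#   features = encoder(X)            # any unsupervised feature extractor
#   conds = pseudo_conditions(features)
# The resulting labels could then condition the generator, the inverse
# mapping, and the discriminator during bidirectional adversarial training.
```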

Highlights

  • Generative Adversarial Networks (GANs) [1] have been recognized as one of the most powerful frameworks for learning data-generating distributions

  • In this paper, we have proposed a novel pseudo conditional bidirectional adversarial learning framework for training the inverse mapping of GANs, so that the resulting latent space reflects the similarity among data and captures their semantic structure

  • While prior techniques rarely provide a consistent mapping between data and their latent representations, we guide the models with a predefined global structure of the latent space, formed by the proximity relationships among data in an unsupervised learned feature space

Summary

Introduction

Generative Adversarial Networks (GANs) [1] have been recognized as one of the most powerful frameworks for learning data-generating distributions. In the GAN framework, a generative model, the generator, is trained to imitate real data by learning a mapping from a latent distribution to the data space. An adversarial model, the discriminator, is concurrently trained to distinguish between generated and real samples. Training the two adversarial models is formulated as a two-player minimax game and leads the generator to closely approximate the real data distribution once it becomes difficult for the discriminator to distinguish between true and fake samples. We denote the distribution of data x ∈ X as P_X(x) and the latent prior as P_Z(z). The GAN framework consists of two parametric models: a generator G and a discriminator D. The discriminator distinguishes whether a particular x is drawn from P_X(x) or generated as G(z) with z ~ P_Z(z).
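
For reference, the two-player minimax game described above corresponds to the standard GAN value function of Goodfellow et al. [1]; written in the notation above, it is:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim P_X(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim P_Z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

The discriminator maximizes this value while the generator minimizes it, which is what drives G(z) toward the real data distribution.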
