Abstract

Generative Adversarial Networks (GANs) stage an adversarial game between two neural networks: a generator and a discriminator. Many studies treat the discriminator's outputs as an implicit posterior over the input image distribution. Increasing the number of discriminator output dimensions can therefore represent richer information than a single scalar output. However, enlarging the output dimensions also yields a very strong discriminator that easily overpowers the generator and breaks the balance of adversarial learning. Resolving this conflict while elevating the generation quality of GANs remains challenging. We propose a simple yet effective method to resolve it: a stochastic selection scheme that extends the flipped and non-flipped non-saturating losses in BipGAN. We build our experiments on the well-known BigGAN model for comparison. On several standard evaluation metrics and real-world datasets, the experiments validate that our approach strengthens generation quality within limited output dimensions, and it achieves competitive results on the human face generation task.
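The abstract does not spell out the selection rule, but the idea of stochastically choosing between a flipped and a non-flipped non-saturating generator loss over multi-dimensional discriminator outputs can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-dimension Bernoulli mask, the flip probability `p_flip`, and the exact loss forms are assumptions for the sake of the example.

```python
import numpy as np

def non_saturating_loss(d_fake):
    """Standard non-saturating generator loss: -log(D(G(z)))."""
    return -np.log(d_fake + 1e-8)

def flipped_loss(d_fake):
    """'Flipped' variant (minimax form): log(1 - D(G(z)))."""
    return np.log(1.0 - d_fake + 1e-8)

def stochastic_selected_loss(d_fake, p_flip, rng):
    """Hypothetical stochastic selection over multi-dimensional outputs.

    d_fake : (batch, dims) discriminator outputs in (0, 1), one per
             output dimension of the (widened) discriminator.
    p_flip : probability of using the flipped loss for a given dimension.
    """
    # Per-element Bernoulli draw decides which loss each dimension uses,
    # so no single loss dominates training of the widened discriminator.
    mask = rng.random(d_fake.shape) < p_flip
    losses = np.where(mask, flipped_loss(d_fake), non_saturating_loss(d_fake))
    return losses.mean()
```

With `p_flip = 0` this reduces to the ordinary non-saturating loss, and with `p_flip = 1` to the fully flipped loss; intermediate values mix the two stochastically across output dimensions.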
