Abstract

Generative Adversarial Networks (GANs) are difficult to train because of pathologies such as mode collapse and discriminator collapse. Similar pathologies have been studied and addressed in competitive evolutionary computation by increasing diversity. We study Lipizzaner, a system that combines spatial coevolution with gradient-based learning to improve the robustness and scalability of GAN training, and we examine different features of its evolutionary computation methodology. Our ablation experiments determine that communication, selection, parameter optimization, and ensemble optimization play critical roles, both individually and in combination. Lipizzaner succumbs less frequently to critical collapses and, as a side benefit, demonstrates improved performance. In addition, we show a GAN-training feature of Lipizzaner: the ability to train simultaneously with a different loss function in the gradient descent parameter learning framework of each GAN at each cell. Using an image generation problem, we show that different loss function combinations result in models with better accuracy and more diversity than other existing evolutionary GAN models. Finally, Lipizzaner with multiple loss function options promotes the best model diversity, although it requires a large grid size for adequate accuracy.
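As a rough illustration of the spatial coevolution ingredient, the following toy Python sketch (our own simplification, not Lipizzaner's implementation) evolves a grid of scalar generator/discriminator stand-ins, where each cell selects the best pairing from its von Neumann neighborhood on a toroidal grid. The grid size, the toy fitness function, and the Gaussian mutation that stands in for gradient-based parameter updates are all assumptions made for this example:

```python
import random

GRID = 3  # assumed 3x3 toroidal grid for illustration

def neighborhood(row, col):
    """Cell plus north/south/east/west neighbors, with toroidal wrap-around."""
    return [(row, col),
            ((row - 1) % GRID, col), ((row + 1) % GRID, col),
            (row, (col - 1) % GRID), (row, (col + 1) % GRID)]

def fitness(gen, disc):
    """Stand-in for adversarial evaluation of a generator against a
    discriminator; a real system would compute a GAN loss here."""
    return -abs(gen - disc)  # toy score only

def evolve_step(grid):
    """One generation: each cell gathers neighborhood sub-populations,
    evaluates all generator/discriminator pairings, and keeps the best pair."""
    new_grid = {}
    for row in range(GRID):
        for col in range(GRID):
            gens = [grid[c]["G"] for c in neighborhood(row, col)]
            discs = [grid[c]["D"] for c in neighborhood(row, col)]
            _, best_g, best_d = max(((fitness(g, d), g, d)
                                     for g in gens for d in discs),
                                    key=lambda t: t[0])
            # Gaussian mutation stands in for the gradient-based update.
            new_grid[(row, col)] = {"G": best_g + random.gauss(0, 0.1),
                                    "D": best_d + random.gauss(0, 0.1)}
    return new_grid

# Initialize a grid of toy (G, D) parameter pairs and run a few generations.
grid = {(r, c): {"G": random.random(), "D": random.random()}
        for r in range(GRID) for c in range(GRID)}
for _ in range(5):
    grid = evolve_step(grid)
```

The neighborhood overlap is what propagates good models across the grid: each cell communicates only with adjacent cells, yet selection pressure spreads improvements globally over generations.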

Highlights

  • Generative modeling aims to learn a function that describes a latent, i.e., unknown, distribution

  • The results provided by Lipizzaner variants that generate diversity only through a spatial population, each cell using a single loss function (BCE-BCE with binary cross-entropy, HEU-HEU with heuristic loss, and MSE-MSE with mean squared error; see the sketch after this list), are significantly more competitive than E-GAN, which experiments with loss function diversity but does not evolve a population from one generation to the next

  • RAND-COMBOS explicitly adds an additional means of generating diversity; we investigate the diversity of the evolved networks to see whether one network is replicated over the entire grid and its neighborhoods
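To make the loss-function variants named above concrete, here is a minimal, illustrative Python sketch (not Lipizzaner's actual code) of the three per-sample generator losses; the function names and the assumption that the discriminator outputs a probability d_fake = D(G(z)) in (0, 1) are ours:

```python
import math

# d_fake is the discriminator's output D(G(z)) in (0, 1) for a generated sample.

def gen_loss_bce(d_fake):
    """BCE/minimax generator loss: log(1 - D(G(z))).
    Minimized as the generator fools the discriminator (D(G(z)) -> 1)."""
    return math.log(1.0 - d_fake)

def gen_loss_heu(d_fake):
    """Heuristic (non-saturating) loss: -log D(G(z)).
    Same optimum as BCE, but with stronger gradients early in training."""
    return -math.log(d_fake)

def gen_loss_mse(d_fake):
    """Least-squares (MSE) loss: (D(G(z)) - 1)^2, as in LSGAN."""
    return (d_fake - 1.0) ** 2
```

In the BCE-BCE, HEU-HEU, and MSE-MSE variants, every cell trains with the same loss; RAND-COMBOS draws from all of them across the grid, which is the additional diversity mechanism referred to above.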

Introduction

Generative modeling aims to learn a function that describes a latent, i.e., unknown, distribution. The trained model can then be used to generate unobserved samples. Generative Adversarial Networks (GANs) consist of two neural network models, a discriminator and a generator, with the generator serving as the generative model [Goodfellow 2016]. The training objective of the generator is to fool the discriminator with its samples, synthesized from a latent input space and a non-linear function. This forms a coupled minimization and maximization, or "minimax," problem. The adversarial paradigm introduces GAN training challenges.
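For reference, this minimax problem is conventionally formalized with the following value function (the standard formulation from the GAN literature; notation ours):

\[
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
\]

Here the discriminator D maximizes its ability to label real and generated samples correctly, while the generator G minimizes the same objective by driving D(G(z)) toward 1.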
