Abstract

Generative adversarial networks (GANs) have recently gained popularity in artificial intelligence research due to their superior performance in content generation, enhancement, and style transfer compared to other generative models. Introduced in 2014, GANs have been applied in computer vision, natural language processing, medical applications, and cyber security, with the number of use cases growing rapidly. GANs are, however, difficult to train in practice due to their inherent high dimensionality and the complexity of the adversarial learning task. Loss landscape analysis can help unravel why GANs are difficult to train, as the analysis creates a topology of the search space. This study examines the vanilla and deep convolutional GAN architectures to gain a deeper understanding of their loss landscapes during training. The GAN loss landscape features are visualised using loss gradient clouds (LGCs). The LGC analysis highlights the importance of volatility in GAN training: a range of gradient magnitudes allows more exploration in finding an appropriate middle ground when balancing the loss objectives of the GAN.
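For context, the competing loss objectives referred to above are, in the standard 2014 formulation, a minimax game between the generator G and the discriminator D (the architectures studied here may train with variants of this objective, such as the non-saturating generator loss):

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]
```

The discriminator ascends this value function while the generator descends it, which is why the two losses must be balanced rather than independently minimised.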
