Abstract

Optimization algorithms and objective functions play an important role in training deep learning networks. This paper explores the impact of different optimization algorithms and objective functions on the training of Generative Adversarial Networks (GANs). The paper first summarizes the GAN variants available in the literature, then evaluates them under different objective functions and optimization algorithms. The GANs are analyzed empirically in terms of generator loss, discriminator loss, and accuracy, with training conducted on the MNIST dataset. The results indicate that the Adam optimization algorithm combined with a conditional objective function is a good choice for improved GAN training.
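To make the recommended configuration concrete, the following is a minimal sketch (not the paper's exact setup) of a conditional GAN trained on MNIST with Adam optimizers for both networks, the combination the abstract reports as favorable. The network sizes and hyperparameters here (latent dimension 100, learning rate 2e-4, betas (0.5, 0.999)) are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

LATENT_DIM, NUM_CLASSES, IMG_DIM = 100, 10, 28 * 28  # assumed sizes for MNIST

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + NUM_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh(),
        )

    def forward(self, z, labels):
        # Condition the generator by concatenating a label embedding to the noise.
        return self.net(torch.cat([z, self.label_emb(labels)], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM + NUM_CLASSES, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, img, labels):
        # The discriminator also sees the label, making the objective conditional.
        return self.net(torch.cat([img, self.label_emb(labels)], dim=1))

G, D = Generator(), Discriminator()
# Adam for both networks, per the abstract's finding; betas are a common GAN choice.
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCELoss()

def train_step(real_imgs, labels):
    batch = real_imgs.size(0)
    real_imgs = real_imgs.view(batch, -1)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator step: real images labeled 1, generated images labeled 0.
    z = torch.randn(batch, LATENT_DIM)
    fake_imgs = G(z, labels).detach()
    d_loss = bce(D(real_imgs, labels), ones) + bce(D(fake_imgs, labels), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator predict 1 on fakes.
    z = torch.randn(batch, LATENT_DIM)
    g_loss = bce(D(G(z, labels), labels), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

The returned discriminator and generator losses correspond to the metrics the paper tracks during training; logging them per batch is one simple way to reproduce that kind of analysis.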
