Abstract
Improving the performance of a Generative Adversarial Network (GAN) requires experimentation to choose suitable training hyper-parameters, chiefly the learning rate and batch size. There is no consensus on learning rates or batch sizes for GANs, which makes obtaining acceptable output a trial-and-error process, and researchers hold differing views on how batch size affects run time. This paper investigates the impact of these training parameters on the actual elapsed training time of GANs. In our initial experiments, we study the effects of batch size, learning rate, loss function, and optimization algorithm on training with the MNIST dataset over 30,000 epochs. The simplicity of MNIST makes it a useful starting point for determining whether these parameter changes have any significant impact on training time. The goal is to analyze and understand the results of varying loss functions, batch sizes, optimizer algorithms, and learning rates in GANs, and to address the key issue of batch-size and learning-rate selection.
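The kind of study the abstract describes can be sketched as a grid search over training hyper-parameters, timing each run with a wall-clock timer. The sketch below is illustrative only: the parameter values and the `train_gan` stub are hypothetical stand-ins, not the configurations or training code used in the paper.

```python
import itertools
import time

# Hypothetical hyper-parameter grid; these values are illustrative,
# not the ones evaluated in the paper.
batch_sizes = [32, 64, 128]
learning_rates = [1e-4, 2e-4, 1e-3]
optimizers = ["adam", "sgd"]
loss_functions = ["bce", "wasserstein"]

def train_gan(batch_size, lr, optimizer, loss_fn):
    """Stand-in for an actual GAN training run on MNIST.

    A real implementation would train generator and discriminator
    here; this stub only returns the configuration it was given.
    """
    return {"batch_size": batch_size, "lr": lr,
            "optimizer": optimizer, "loss": loss_fn}

# Run every combination and record actual elapsed training time.
results = []
for bs, lr, opt, loss in itertools.product(
        batch_sizes, learning_rates, optimizers, loss_functions):
    start = time.perf_counter()
    outcome = train_gan(bs, lr, opt, loss)
    elapsed = time.perf_counter() - start
    results.append({**outcome, "elapsed_s": elapsed})

print(len(results))  # 3 * 3 * 2 * 2 = 36 configurations
```

Using `time.perf_counter()` rather than `time.time()` gives a monotonic, high-resolution clock, which matters when comparing elapsed training times across configurations.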