Abstract

Image processing has long been a focal point of research, offering avenues to enhance image clarity and transfer image features. Over the past decade, Generative Adversarial Networks (GANs) have played a pivotal role in image-to-image translation. This study examines the SuperstarGAN model and its optimization. SuperstarGAN, an evolution of the well-known StarGAN, excels at multi-domain image-to-image translation, overcoming its predecessor's limitations and offering greater versatility. To better understand its optimization, this study explores the effects of different optimizers, namely Adam, SGD, and Nadam, on SuperstarGAN's performance. Using the CelebA face dataset, with approximately 200,000 images annotated with 40 attributes, experiments were conducted to compare these optimizers. The results reveal that while SGD and Nadam can achieve results comparable to Adam's, they require more iterations and careful tuning, with SGD showing the slowest convergence. Nadam, despite its oscillatory behavior, shows promise provided the learning rate is adjusted appropriately. This research highlights the critical role of optimizer choice in training SuperstarGAN: Adam emerges as the most efficient and stable option, but further exploration of Nadam's potential is warranted. The study contributes to the understanding of optimization techniques for generative adversarial networks, with implications for high-quality facial image generation and beyond.
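The three optimizers compared in the study differ only in their parameter-update rules. As a minimal illustration (not the actual SuperstarGAN training setup), the sketch below applies hand-rolled SGD, Adam, and Nadam updates to a one-dimensional quadratic loss; the learning rate and step counts are illustrative choices, not values from the paper.

```python
import math

def grad(x):
    # Gradient of the toy loss f(x) = (x - 3)^2, minimized at x = 3.
    return 2.0 * (x - 3.0)

def run_sgd(x, lr=0.05, steps=2000):
    # Plain gradient descent: step directly along the negative gradient.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def run_adam(x, lr=0.05, b1=0.9, b2=0.999, eps=1e-8, steps=2000):
    # Adam: bias-corrected first and second moment estimates of the gradient.
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        m_hat = m / (1 - b1 ** t)
        v_hat = v / (1 - b2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

def run_nadam(x, lr=0.05, b1=0.9, b2=0.999, eps=1e-8, steps=2000):
    # Nadam: Adam with a Nesterov-style look-ahead that mixes the
    # current gradient back into the momentum term.
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        v_hat = v / (1 - b2 ** t)
        update = b1 * m / (1 - b1 ** (t + 1)) + (1 - b1) * g / (1 - b1 ** t)
        x -= lr * update / (math.sqrt(v_hat) + eps)
    return x
```

In a GAN training loop the same three rules would be applied to the generator and discriminator weights; the abstract's observation is that with these per-step dynamics, SGD needs more iterations to reach a comparable loss, while Nadam's momentum-driven updates can oscillate unless the learning rate is tuned.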
