Abstract
Generative Adversarial Networks (GANs) have achieved great success in image processing and computer vision. Neural Architecture Search (NAS), the process of automating architectural engineering, has been applied to GANs to improve their backbone architectures. Currently, image generation and GAN model compression are the key tasks to which NAS has been applied in GANs. This study analysed the NAS literature on the search spaces, search strategies, and performance estimation strategies applied in GANs. The analysis reveals that both cell-based and chain/entire-structured search spaces have been used, with the cell-based structure being the most common search space type for GANs. Reinforcement learning (RL), gradient-based methods, and evolutionary algorithms have been applied as search strategies to find optimal GAN architectures. Weight sharing has been the performance estimation strategy in most research on NAS in GANs. Furthermore, the multi-objective architecture search approach for GANs, which is based on an evolutionary search strategy, was found to achieve the most outstanding results among NAS-based GANs in both supervised and unsupervised image generation. Results were analysed on the CIFAR-10 and STL-10 datasets in terms of Inception Score (IS) and Fréchet Inception Distance (FID). This paper is a review of NAS in GANs, and it outlines possible future work to stimulate further research on NAS in GANs.