Abstract

With the advancement of Deep Neural Networks and their growing range of applications, the requirement for data has increased exponentially. Deep generative models, specifically Generative Adversarial Networks (GANs), have emerged as a powerful tool to fulfil this requirement. However, tuning GAN parameters is extremely difficult due to training instability, and GANs are prone to missing modes during training, a problem termed mode collapse. Mode collapse leads the generator to produce images of only a few modes while ignoring the other mode classes. In the present research, we propose a novel method to deal with mode collapse using a multiple-generator architecture. We first compare different GAN architectures that address the mode collapse problem, using the Inception Score (IS) as the evaluation metric. We begin by analysing GANs on a simple dataset (MNIST) using the DCGAN architecture. To produce better results, the present work then describes the implementation of two further approaches. We experiment with the Wasserstein GAN (WGAN), which improves GAN training by adopting a different metric, the Wasserstein distance, for measuring the distance between two probability distributions. Subsequently, we propose a multiple-generator GAN architecture that uses several generators to better address the missing-modes problem. We evaluate our approach on several datasets (MNIST, CIFAR-10, SVHN, and the CelebA face dataset) with encouraging results compared to other existing architectures.
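The WGAN objective mentioned above can be illustrated with a minimal sketch: the critic is trained to maximize the gap between its mean scores on real and generated samples, which estimates the Wasserstein-1 distance between the two distributions. All function names and toy values below are illustrative, not taken from the paper's implementation.

```python
# Minimal sketch of the WGAN objectives (illustrative only).
# The critic maximizes E[D(x_real)] - E[D(x_fake)], an estimate of the
# Wasserstein-1 distance; we express both objectives as losses to minimize.

def critic_loss(real_scores, fake_scores):
    """Negative Wasserstein estimate, minimized by the critic."""
    mean = lambda xs: sum(xs) / len(xs)
    return -(mean(real_scores) - mean(fake_scores))

def generator_loss(fake_scores):
    """The generator tries to raise the critic's score on fake samples."""
    return -sum(fake_scores) / len(fake_scores)

# Toy critic outputs: a well-trained critic scores real samples higher.
real = [0.9, 0.8, 1.1]
fake = [0.1, -0.2, 0.0]
print(critic_loss(real, fake))  # negative when the critic separates the two
```

In a full implementation the critic must also be kept (approximately) 1-Lipschitz, e.g. via weight clipping as in the original WGAN, for the score gap to be a valid Wasserstein estimate.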
