Abstract

In this paper, we propose a novel technique for training Generative Adversarial Networks (GANs) using autoencoders. In recent years, GANs have emerged as one of the most popular generative models. Despite their success, maintaining the trade-off between the diversity and quality of the generated distribution remains challenging. Our idea stems from the fact that the deeper layers of an autoencoder contain high-level feature representations of the input data distribution. Reusing these layers provides the GAN with information about the representative characteristics of the real data and hence can guide its adversarial training. We call our model Guided GAN, since the autoencoder (guiding network) provides a direction for training the GAN (generative network). Guided GAN also minimizes both the forward and reverse Kullback-Leibler (KL) divergences in a single model, exploiting the complementary statistical properties of the two. We conduct extensive experiments and use various metrics to assess the quality and diversity of generated images and the convergence of the model. Our model is evaluated on two standard datasets, CIFAR-10 and CelebA, and demonstrates either superior or competitive performance compared to baseline GANs, especially in the earlier stages of training. Our guided training procedure has been tested on different baseline GANs without any changes to their hyperparameter configurations or architectures.
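To make the divergence claim concrete, the complementary behavior of the two KL directions can be sketched as below; this is a minimal unweighted form for illustration, not necessarily the paper's exact objective:

$$
\mathcal{L}_{\text{guided}} \;=\; \underbrace{D_{\mathrm{KL}}\!\left(p_{\mathrm{data}} \,\|\, p_{g}\right)}_{\text{forward: mass-covering, favors diversity}} \;+\; \underbrace{D_{\mathrm{KL}}\!\left(p_{g} \,\|\, p_{\mathrm{data}}\right)}_{\text{reverse: mode-seeking, favors sample quality}}
$$

where $p_{\mathrm{data}}$ is the real data distribution and $p_{g}$ the generator's distribution. The forward direction penalizes missing modes of the data, while the reverse direction penalizes placing mass where the data has none.

The layer-reuse idea can likewise be illustrated with a minimal PyTorch sketch. The architecture, layer sizes, and the choice to share the decoder weights directly with the generator are assumptions for illustration, not the paper's implementation:

```python
import torch
import torch.nn as nn

latent_dim = 128
img_dim = 3 * 32 * 32  # e.g. flattened CIFAR-10 images

# Autoencoder: the encoder compresses real images; the decoder's deeper
# layers learn high-level features of the data distribution.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(img_dim, 512), nn.ReLU(),
                        nn.Linear(512, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                        nn.Linear(512, img_dim), nn.Tanh())

# Stage 1 (guiding network): train the autoencoder on real data with a
# reconstruction loss.
ae_opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

def autoencoder_step(real_batch: torch.Tensor) -> None:
    ae_opt.zero_grad()
    recon = decoder(encoder(real_batch))
    loss = nn.functional.mse_loss(recon, real_batch.flatten(1))
    loss.backward()
    ae_opt.step()

# Stage 2 (generative network): reuse the trained decoder layers inside
# the generator, so adversarial training starts from learned
# representations of real data rather than from scratch
# (hypothetical reuse scheme).
generator = decoder
```

One plausible reading of this setup is that the autoencoder's reconstruction signal over real samples supplies the mass-covering pressure, while the adversarial loss supplies the mode-seeking pressure toward sample quality; the paper's full text would determine the exact division of roles.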
