Abstract

With the development of deep neural networks, recent years have witnessed increasing research interest in generative models. In particular, Variational Auto-Encoders (VAE) and Generative Adversarial Networks (GAN) have achieved impressive results on various generative tasks. VAE is well established and theoretically elegant, but tends to generate blurry samples. In contrast, GAN produces images of higher visual quality, but has difficulty translating a random vector into a desired high-dimensional sample; as a result, its training dynamics are often unstable and the generated samples can collapse to a limited number of modes. In this paper, we propose a new Auto-Encoder Generative Adversarial Networks (AEGAN) model, which combines the advantages of VAE and GAN. In our approach, instead of matching the encoded distribution of training samples to the prior P_z as in VAE, we map the random vector into the encoded latent space through adversarial training based on GAN. In addition, we match the decoded distribution of training samples with that of samples decoded from random vectors. To evaluate our approach, we compare it with other encoder-decoder based generative models on three public datasets. Both qualitative and quantitative experimental results demonstrate the superiority of our algorithm over the compared generative models.
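The two adversarial matching steps described above can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch illustration of the idea, not the authors' implementation: the module names (`encoder`, `decoder`, `mapper`, `d_latent`, `d_data`), the toy layer sizes, and the unweighted loss sum are all assumptions made for exposition.

```python
# Hypothetical sketch of the AEGAN idea from the abstract. All module names,
# dimensions, and loss weights are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

latent_dim, noise_dim, data_dim = 64, 32, 784  # assumed toy dimensions

encoder = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim), nn.Tanh())
# Maps a random vector z into the encoded latent space (instead of VAE's prior matching).
mapper = nn.Sequential(nn.Linear(noise_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
# Latent-space discriminator: "real" codes come from encoder(x), "fake" ones from mapper(z).
d_latent = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, 1))
# Data-space discriminator: compares decoded training samples with decodings of mapped noise.
d_data = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

bce = nn.BCEWithLogitsLoss()

def generator_losses(x, z):
    """Generator-side losses for one step (discriminator updates omitted for brevity)."""
    real_code, fake_code = encoder(x), mapper(z)
    # Adversarially match the mapped noise distribution to the encoded latent distribution.
    latent_loss = bce(d_latent(fake_code), torch.ones(x.size(0), 1))
    # Adversarially match samples decoded from random vectors to decoded training samples.
    data_loss = bce(d_data(decoder(fake_code)), torch.ones(x.size(0), 1))
    # A plain reconstruction term keeps the auto-encoder faithful to its input.
    recon_loss = nn.functional.mse_loss(decoder(real_code), x)
    return recon_loss + latent_loss + data_loss

x = torch.rand(16, data_dim) * 2 - 1   # a toy batch of flattened images in [-1, 1]
z = torch.randn(16, noise_dim)         # random vectors drawn from the prior
loss = generator_losses(x, z)
loss.backward()
```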
