Abstract

While variational autoencoders (VAEs) provide a theoretical basis for deep generative models, they often produce blurry images, a shortcoming linked to their training objective. In this paper, we propose the Sharpened Adversarial Variational Auto-Encoder (AVAE-S), which uses an adversarial training mechanism to fine-tune the learned latent code of the VAE with a specialized objective function. The loss function is designed to uncover global structure as well as local and high-frequency features, leading to a smaller variance in the aggregated posterior and hence less blurriness in the generated samples. AVAE-S steers the learned representations toward meaningful latent features by enforcing feature consistency between the model distribution and the target distribution, yielding sharper output with better perceptual quality. AVAE-S then trains a GAN, whose generator is collapsed onto the VAE's decoder, on top of that learned latent code. Moreover, we augment the standard VAE evidence lower bound objective with additional element-wise similarity measures. Our experiments show that AVAE-S achieves state-of-the-art sample quality on the common MNIST and CelebA datasets. AVAE-S retains many of the desirable properties of the VAE (stable training, an encoder-decoder architecture, a well-structured latent manifold) while generating more realistic images, as measured by the sharpness score.
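To make the training recipe concrete, below is a minimal PyTorch sketch of the setup the abstract describes: a VAE whose decoder doubles as the GAN generator, a discriminator whose intermediate features drive a feature-consistency term, and an ELBO augmented with an element-wise reconstruction loss. All layer sizes and the names `latent_dim` and `feat_weight` are illustrative assumptions; the paper's actual architecture and hyperparameters are not given in the abstract.

```python
# Minimal sketch of the AVAE-S training idea described in the abstract.
# All module sizes and loss weights are illustrative assumptions, not the
# authors' actual architecture or hyperparameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim = 32  # assumed latent size (MNIST-shaped inputs: 1x28x28)

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):  # also serves as the GAN generator
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 784), nn.Sigmoid())
    def forward(self, z):
        return self.net(z).view(-1, 1, 28, 28)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Flatten(),
                                      nn.Linear(784, 256), nn.LeakyReLU(0.2))
        self.head = nn.Linear(256, 1)
    def forward(self, x):
        f = self.features(x)
        return self.head(f), f  # real/fake logit and intermediate features

enc, dec, disc = Encoder(), Decoder(), Discriminator()
opt_vae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
feat_weight = 1.0  # assumed weighting of the feature-consistency term

def train_step(x):
    # VAE pass: reparameterized latent code and reconstruction
    mu, logvar = enc(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    x_rec = dec(z)

    # Discriminator update: separate real images from reconstructions
    logit_real, _ = disc(x)
    logit_fake, _ = disc(x_rec.detach())
    loss_d = (F.binary_cross_entropy_with_logits(logit_real, torch.ones_like(logit_real))
              + F.binary_cross_entropy_with_logits(logit_fake, torch.zeros_like(logit_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Encoder/decoder update: ELBO terms plus adversarial feature consistency
    rec = F.binary_cross_entropy(x_rec, x, reduction='sum') / x.size(0)  # element-wise similarity
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
    _, f_real = disc(x)
    logit_fake2, f_fake = disc(x_rec)
    feat = F.mse_loss(f_fake, f_real.detach())  # match discriminator features
    adv = F.binary_cross_entropy_with_logits(logit_fake2, torch.ones_like(logit_fake2))
    loss_g = rec + kl + feat_weight * feat + adv
    opt_vae.zero_grad(); loss_g.backward(); opt_vae.step()
    return loss_d.item(), loss_g.item()

# Usage on a random MNIST-shaped batch:
x = torch.rand(8, 1, 28, 28)
print(train_step(x))
```

The key design choice this sketch illustrates is weight sharing: the decoder and the generator are the same module, so the adversarial and feature-consistency gradients flow directly into the VAE's decoder rather than into a separate network.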
