Abstract

A hallmark of variational autoencoders (VAEs) for text processing is their combination of powerful encoder-decoder models, such as LSTMs, with simple latent distributions, typically multivariate Gaussians. These models pose a difficult optimization problem: there is an especially bad local optimum where the variational posterior always equals the prior and the model does not use the latent variable at all, a kind of “collapse” which is encouraged by the KL divergence term of the objective. In this work, we experiment with another choice of latent distribution, namely the von Mises-Fisher (vMF) distribution, which places mass on the surface of the unit hypersphere. With this choice of prior and posterior, the KL divergence term now only depends on the variance of the vMF distribution, giving us the ability to treat it as a fixed hyperparameter. We show that doing so not only averts the KL collapse, but consistently gives better likelihoods than Gaussians across a range of modeling conditions, including recurrent language modeling and bag-of-words document modeling. An analysis of the properties of our vMF representations shows that they learn richer and more nuanced structures in their latent representations than their Gaussian counterparts.

Highlights

  • Recent work has established the effectiveness of deep generative models for a range of tasks in NLP, including text generation (Hu et al., 2017; Yu et al., 2017), machine translation (Zhang et al., 2016), and style transfer (Shen et al., 2017; Zhao et al., 2017a)

  • We propose to use the von Mises-Fisher (vMF) distribution rather than a Gaussian for our latent variable. vMF places a distribution over the unit hypersphere governed by a mean parameter μ and a concentration parameter κ

  • We follow the implementation reported in Bowman et al. (2016), where the KL term weight is annealed for the Gaussian variational autoencoders (VAEs); the vMF VAE works well without weight annealing (see the annealing sketch below)
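
The sketch below illustrates the kind of KL-weight ramp meant by "annealing" here. The linear form and the `kl_weight` name and default step count are illustrative assumptions on our part (Bowman et al. (2016) describe a sigmoidal schedule), not code from the paper.

```python
def kl_weight(step: int, total_anneal_steps: int = 10000) -> float:
    """Linear KL-weight ramp for a Gaussian VAE.

    Early in training the weight is near 0, so the model is not pushed
    toward the collapsed posterior-equals-prior solution; it rises to 1
    as training proceeds. `total_anneal_steps` is an illustrative
    hyperparameter, not a value from the paper.
    """
    return min(1.0, step / total_anneal_steps)

# Schematic training loss for the Gaussian VAE at a given step:
#   loss = reconstruction_loss + kl_weight(step) * kl_divergence
# A vMF VAE with fixed kappa has a constant KL term, so no such
# schedule is needed and the weight can simply stay at 1.
```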

Summary

Introduction

Recent work has established the effectiveness of deep generative models for a range of tasks in NLP, including text generation (Hu et al., 2017; Yu et al., 2017), machine translation (Zhang et al., 2016), and style transfer (Shen et al., 2017; Zhao et al., 2017a). Variational autoencoders have been explored in past work for text modeling (Miao et al., 2016; Bowman et al., 2016). In this paper, we propose to use the von Mises-Fisher (vMF) distribution rather than a Gaussian for our latent variable. vMF places a distribution over the unit hypersphere governed by a mean parameter μ and a concentration parameter κ. Since the KL divergence only depends on κ, we can structurally prevent the KL collapse and make our model’s optimization problem easier. We show that this approach is more robust than trying to flexibly learn κ, and that a wide range of settings for fixed κ lead to good performance. Our model systematically achieves better log likelihoods than analogous Gaussian models while having higher KL divergence values, showing that it more successfully makes use of the latent variables at the end of training.
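
As a concrete illustration of why the KL term can be fixed, the sketch below evaluates the standard closed-form KL divergence between vMF(μ, κ) and the uniform prior on the unit hypersphere; it depends only on κ and the latent dimensionality m, never on μ. The function name and the use of SciPy's Bessel functions are our own illustrative choices and may differ in presentation from the paper's derivation.

```python
import numpy as np
from scipy.special import ive, gammaln


def vmf_kl_to_uniform(kappa: float, m: int) -> float:
    """KL( vMF(mu, kappa) || Uniform(S^{m-1}) ) for kappa > 0.

    The result depends only on the concentration kappa and the latent
    dimensionality m, not on the mean direction mu, which is why kappa
    can be treated as a fixed hyperparameter that pins the KL term to a
    constant. Uses exponentially scaled Bessel functions (ive) for
    numerical stability: I_v(kappa) = ive(v, kappa) * exp(kappa).
    """
    # E[mu^T x] under vMF(mu, kappa): ratio of modified Bessel functions.
    bessel_ratio = ive(m / 2, kappa) / ive(m / 2 - 1, kappa)

    # Log normalizer of the vMF density, log C_m(kappa), where
    # C_m(kappa) = kappa^{m/2 - 1} / ((2 pi)^{m/2} I_{m/2 - 1}(kappa)).
    log_c = ((m / 2 - 1) * np.log(kappa)
             - (m / 2) * np.log(2 * np.pi)
             - (np.log(ive(m / 2 - 1, kappa)) + kappa))

    # Log density of the uniform distribution on S^{m-1}:
    # surface area = 2 pi^{m/2} / Gamma(m/2).
    log_uniform = gammaln(m / 2) - np.log(2) - (m / 2) * np.log(np.pi)

    return float(kappa * bessel_ratio + log_c - log_uniform)
```

Sweeping κ through this function for a given latent dimensionality traces out the constant KL cost the fixed-κ vMF VAE pays at every step, which is what rules out the collapse to zero KL that the Gaussian model can fall into.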
