Abstract

Deep latent variable models (LVMs) such as the variational auto-encoder (VAE) have recently played an important role in text generation. One key factor is the exploitation of smooth latent structures to guide the generation. However, the representation power of VAEs is limited for two reasons: (1) a Gaussian assumption is often made on the variational posteriors, and (2) a notorious "posterior collapse" issue occurs during training. In this paper, we advocate sample-based representations of variational distributions for natural language, leading to implicit latent features, which can provide flexible representation power compared with Gaussian-based posteriors. We further develop an LVM that directly matches the aggregated posterior to the prior. It can be viewed as a natural extension of VAEs with a regularization that maximizes mutual information, mitigating the "posterior collapse" issue. We demonstrate the effectiveness and versatility of our models in various text generation scenarios, including language modeling, unaligned style transfer, and dialog response generation. The source code to reproduce our experimental results is available on GitHub.
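
To make the sample-based (implicit) posterior concrete, the sketch below shows one way an encoder can produce latent samples by injecting Gaussian noise into its hidden state, instead of predicting the mean and variance of a Gaussian. This is a minimal PyTorch-style illustration; the module name, layer sizes, and the single-hidden-layer generator are our own assumptions, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    class ImplicitPosterior(nn.Module):
        """Sample-based posterior: z = G(h, eps) with eps ~ N(0, I).

        Rather than outputting (mu, sigma) for a Gaussian q(z|x), the encoder
        mixes random noise into its hidden state, so q(z|x) is represented
        only through its samples.  Sizes and layers are illustrative.
        """

        def __init__(self, hidden_dim=256, noise_dim=32, latent_dim=32):
            super().__init__()
            self.noise_dim = noise_dim
            self.generator = nn.Sequential(
                nn.Linear(hidden_dim + noise_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, latent_dim),
            )

        def forward(self, h):
            # h: [batch, hidden_dim] sentence encoding from the text encoder.
            eps = torch.randn(h.size(0), self.noise_dim, device=h.device)
            return self.generator(torch.cat([h, eps], dim=-1))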

Highlights

  • (1) Replacing the Gaussian variational distributions in the variational auto-encoder (VAE) with sample-based distributions yields the implicit VAE. (2) We further extend the VAE to maximize mutual information between latent representations and observed sentences, leading to a variant termed iVAE_MI (a sketch of the sample-based KL regularizer used for this variant appears after this list).

  • Evaluation metrics for unaligned style transfer: (1) Acc: the accuracy of transferring sentences into another sentiment, measured by an automatic classifier built with the "fasttext" library (Joulin et al., 2017); (2) BLEU: the consistency between the transferred text and the original; (3) PPL: the reconstruction perplexity of the original sentences without altering sentiment; (4) RPPL: the reverse perplexity, which evaluates the training corpus under a language model derived from the generated text and measures how representative the generations are of the training corpus; (5) Flu: human-rated fluency of the transferred sentences when read alone (1-5, 5 being most fluent as natural language); (6) Sim: human-rated similarity between the original and the transferred sentences in terms of content (1-5, 5 being most similar). A sketch of the automatic metrics appears after this list.
    Example transfer (Input / ARAE / iVAE_MI):
    Input: it was super dry and had a weird taste to the entire slice .
    Adversarially regularized autoencoder (ARAE): it was super nice and the owner was super sweet and helpful .
    iVAE_MI: it was super tasty and a good size with the best in the burgh
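
Because the implicit posterior has no closed-form density, the KL regularizer has to be estimated from samples. The sketch below uses a generic density-ratio critic that separates latent samples drawn from the (aggregated) posterior from samples drawn from the prior; its logit approximates the log-density ratio and serves as a sample-based surrogate for the KL term in iVAE_MI-style training. The binary-classifier form and all names here are illustrative assumptions; the paper derives its own dual-form estimator.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LatentCritic(nn.Module):
        """Scores latent codes; trained to tell posterior samples from prior samples."""

        def __init__(self, latent_dim=32, hidden_dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(latent_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, 1),
            )

        def forward(self, z):
            return self.net(z).squeeze(-1)

    def critic_loss(critic, z_posterior, z_prior):
        # Binary cross-entropy: label posterior samples 1, prior samples 0.
        ones = torch.ones(z_posterior.size(0), device=z_posterior.device)
        zeros = torch.zeros(z_prior.size(0), device=z_prior.device)
        return (F.binary_cross_entropy_with_logits(critic(z_posterior), ones)
                + F.binary_cross_entropy_with_logits(critic(z_prior), zeros))

    def kl_surrogate(critic, z_posterior):
        # The critic's logit approximates log q(z) - log p(z); minimizing its
        # mean over posterior samples pushes the aggregated posterior toward
        # the prior, playing the role of the KL regularizer.
        return critic(z_posterior).mean()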

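For the automatic metrics above, a rough sketch of how Acc and BLEU can be computed is given below, using the fastText library for the sentiment classifier and NLTK for BLEU. File names, label strings, and the whitespace tokenization are assumptions; the paper's evaluation scripts may differ.

    import fasttext
    from nltk.translate.bleu_score import corpus_bleu

    # Acc: a fastText classifier trained on labeled sentiment data judges
    # whether each transferred sentence carries the target sentiment.
    # Training-file format ("__label__pos <text>") and the path are assumptions.
    clf = fasttext.train_supervised(input="sentiment_train.txt")

    def transfer_accuracy(transferred, target_label="__label__pos"):
        hits = sum(clf.predict(s.replace("\n", " "))[0][0] == target_label
                   for s in transferred)
        return hits / len(transferred)

    # BLEU: n-gram overlap between each original sentence and its transfer,
    # measuring how much of the original content is preserved.
    def content_bleu(originals, transferred):
        references = [[o.split()] for o in originals]
        hypotheses = [t.split() for t in transferred]
        return corpus_bleu(references, hypotheses)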

Summary

Introduction

Deep latent variable models (LVMs) such as the variational auto-encoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) have been successfully applied to many natural language processing tasks, including language modeling (Bowman et al., 2015; Miao et al., 2016), dialogue response generation (Zhao et al., 2017b), controllable text generation (Hu et al., 2017), and neural machine translation (Shah and Barber, 2018). One advantage of VAEs is the flexible distribution-based latent representation. In practice, however, this representation power is limited for two reasons. The first is the Gaussian assumption typically placed on the variational posterior, which restricts how expressive the learned latent space can be. The second is the so-called posterior collapse issue, which occurs when learning VAEs with an auto-regressive decoder (Bowman et al., 2015). It produces undesirable outcomes: the encoder yields meaningless posteriors that are very close to the prior, while the decoder tends to ignore the latent codes in generation (Bowman et al., 2015). Several attempts have been made to alleviate this issue (Bowman et al., 2015; Higgins et al., 2017; Zhao et al., 2017a; Fu et al., 2019; He et al., 2019); a common heuristic is to anneal the weight on the KL term during training, as sketched below.
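
As a point of reference for the posterior collapse discussion, the sketch below shows the standard Gaussian-posterior ELBO with the KL-annealing heuristic of Bowman et al. (2015): the closed-form KL term is scaled by a weight that is gradually increased during training so that the decoder does not learn to ignore the latent code. The function name and interface are illustrative assumptions.

    import torch

    def negative_elbo(recon_log_prob, mu, logvar, kl_weight=1.0):
        """Negative ELBO for a VAE with Gaussian posterior q(z|x) = N(mu, exp(logvar)).

        recon_log_prob: summed token log-likelihoods from the decoder, shape [batch].
        kl_weight:      annealed from 0 toward 1 over training (KL annealing).
        """
        # Closed-form KL( N(mu, exp(logvar)) || N(0, I) ) per example.
        kl = 0.5 * torch.sum(logvar.exp() + mu.pow(2) - 1.0 - logvar, dim=-1)
        # Posterior collapse shows up as kl -> 0 while the autoregressive
        # decoder reconstructs the text on its own.
        return (-recon_log_prob + kl_weight * kl).mean()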

Methods
Results
Conclusion
