Abstract

We propose a novel generative model that explores both local and global context for jointly learning topics and topic-specific word embeddings. In particular, we assume that global latent topics are shared across documents, that a word is generated from a hidden semantic vector encoding its contextual meaning, and that its context words are generated conditioned on both the hidden semantic vector and the global latent topics. Topics are trained jointly with the word embeddings. The trained model maps words to topic-dependent embeddings, which naturally addresses the issue of word polysemy. Experimental results show that the proposed model outperforms word-level embedding methods in both word similarity evaluation and word sense disambiguation. Furthermore, the model also extracts more coherent topics than existing neural topic models and other models that jointly learn topics and word embeddings. Finally, the model can be easily integrated with existing deep contextualized word embedding learning methods to further improve the performance of downstream tasks such as sentiment classification.
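
To make the generative story concrete, the sketch below draws context words for a pivot word under the assumptions stated in the abstract: global topics shared across documents, a hidden semantic vector for the pivot word, and context words generated conditioned on both. The variable names, dimensions, and concrete distributional choices (Dirichlet topics, Gaussian hidden vectors, softmax word probabilities) are our own illustrative assumptions, not the paper's exact parameterisation.

```python
import numpy as np

rng = np.random.default_rng(0)
V, K, D = 1000, 20, 50        # vocabulary size, number of topics, embedding dimension

# Global latent topics shared across documents: one word distribution per topic.
topic_word = rng.dirichlet(np.ones(V), size=K)          # shape (K, V)
# Topic-specific word embeddings: one D-dim vector per (topic, word) pair.
topic_word_emb = rng.normal(scale=0.1, size=(K, V, D))

def generate_context(doc_topic_mix, n_context=4):
    """Generate context words around a pivot word.

    The pivot word's contextual meaning is encoded by a hidden semantic
    vector z; each context word is then drawn conditioned on BOTH z
    (through the topic-specific embeddings) and the global latent topics.
    """
    z = rng.normal(size=D)                               # hidden semantic vector
    # Per-topic word scores: topic-word probabilities re-weighted by how well
    # each topic-specific embedding matches the hidden semantic vector.
    scores = topic_word_emb @ z                          # shape (K, V)
    per_topic = topic_word * np.exp(scores - scores.max(axis=1, keepdims=True))
    per_topic /= per_topic.sum(axis=1, keepdims=True)
    # Mix over topics with the document's topic proportions.
    word_probs = doc_topic_mix @ per_topic               # shape (V,)
    return rng.choice(V, size=n_context, p=word_probs)

theta = rng.dirichlet(np.ones(K))                        # a document's topic mixture
print(generate_context(theta))
```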

Highlights

  • Probabilistic topic models assume that words are generated from latent topics that can be inferred from word co-occurrence patterns, taking a document as the global context

  • We propose a neural generative model built on the Variational Auto-Encoder (VAE), called the Joint Topic Word-embedding (JTW) model, for jointly learning topics and topic-specific word embeddings

  • We show that JTW can be integrated with deep contextualized word embeddings to further improve the performance of downstream tasks such as sentiment classification (a minimal sketch of such an integration follows these highlights)

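One plausible reading of the last highlight is to concatenate each token's deep contextualized embedding with its JTW topic-dependent embedding before a sentiment classifier. The sketch below illustrates that idea only; the names (ctx_emb, jtw_emb), dimensions, and the simple linear head with mean pooling are hypothetical choices, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class TopicAugmentedClassifier(nn.Module):
    def __init__(self, ctx_dim=768, jtw_dim=50, n_classes=2):
        super().__init__()
        # Classifier head over the concatenation of contextual and topic-specific embeddings.
        self.head = nn.Linear(ctx_dim + jtw_dim, n_classes)

    def forward(self, ctx_emb, jtw_emb):
        # ctx_emb: (batch, seq, ctx_dim) from a pretrained contextual encoder
        # jtw_emb: (batch, seq, jtw_dim) topic-specific embeddings looked up per token
        fused = torch.cat([ctx_emb, jtw_emb], dim=-1)
        pooled = fused.mean(dim=1)          # simple mean pooling over tokens
        return self.head(pooled)            # sentiment logits

# Dummy usage with random tensors standing in for real embeddings.
model = TopicAugmentedClassifier()
logits = model(torch.randn(8, 32, 768), torch.randn(8, 32, 50))
print(logits.shape)                          # torch.Size([8, 2])
```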

Summary

Introduction

Probabilistic topic models assume that words are generated from latent topics that can be inferred from word co-occurrence patterns, taking a document as the global context. Neural topic models built on the Variational Auto-Encoder (VAE) learn a tractable approximation to the posterior distribution of latent topics given observed words (Miao et al., 2016; Srivastava and Sutton, 2017; Bouchacourt et al., 2018). These models take the bag-of-words (BOW) representation of a given document as the input to the VAE and aim to learn hidden topics that can be used to reconstruct the original document; they do not learn word embeddings concurrently. Information derived from word embeddings can be used to promote semantically related words in the Polya Urn sampling process of topic models (Li et al., 2017) or to generate topic hierarchies (Zhao et al., 2018). However, all of these models rely on pre-trained word embeddings and do not learn word embeddings jointly with topics.
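
To ground the description above, here is a minimal PyTorch sketch of the kind of VAE-based neural topic model those works describe: an encoder maps a document's bag-of-words vector to a latent topic mixture, and a decoder reconstructs the document from it. The layer sizes, the Gaussian-then-softmax parameterisation of the topic mixture, and the loss form are illustrative assumptions, not a faithful reimplementation of any cited model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralTopicModel(nn.Module):
    def __init__(self, vocab_size=2000, n_topics=20, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, n_topics)
        self.to_logvar = nn.Linear(hidden, n_topics)
        self.decoder = nn.Linear(n_topics, vocab_size)   # topic-word weights

    def forward(self, bow):
        h = self.encoder(bow)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation trick
        theta = F.softmax(z, dim=-1)                            # document-topic mixture
        recon_logits = self.decoder(theta)                      # reconstruct the BOW
        # ELBO: reconstruction term plus KL divergence to a standard Gaussian prior.
        recon = -(bow * F.log_softmax(recon_logits, dim=-1)).sum(-1).mean()
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return recon + kl

# Dummy usage: a batch of 8 documents as word-count vectors.
model = NeuralTopicModel()
loss = model(torch.rand(8, 2000))
print(loss.item())
```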
