Abstract
Topic models extract commonly occurring latent topics from textual data. Statistical models such as Latent Dirichlet Allocation do not produce dense topic embeddings that can be readily integrated into neural architectures, whereas earlier neural topic models have yet to fully exploit the discrete nature of the topic space. To bridge this gap, we propose a novel neural topic model, the Discrete-Variational-Inference-based Topic Model (DVITM), which learns dense topic embeddings homomorphic to word embeddings via discrete variational inference. The model also views words as mixtures of topics and operates directly on embedded input text. Quantitative and qualitative evaluations empirically demonstrate the superior performance of DVITM over important baseline models. Finally, case studies on text generation from a discrete space and on aspect-aware item recommendation further illustrate the power of our model in downstream tasks.
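To make the core idea concrete, below is a minimal sketch of discrete variational inference over a topic space using a Gumbel-softmax relaxation, one common way to backpropagate through a discrete latent variable. This is an illustration under assumptions, not the authors' published implementation; all names (`DiscreteTopicEncoder`, `n_topics`, `tau`) are hypothetical. It mirrors the abstract's view of text as a mixture of discrete topics with dense, learnable topic embeddings.

```python
# Hypothetical sketch of discrete variational inference for topic modeling.
# A bag-of-words document is encoded into a distribution over discrete topics;
# the dense document representation is a mixture of learned topic embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteTopicEncoder(nn.Module):
    def __init__(self, vocab_size: int, n_topics: int, emb_dim: int):
        super().__init__()
        self.encoder = nn.Linear(vocab_size, n_topics)    # logits over discrete topics
        self.topic_emb = nn.Embedding(n_topics, emb_dim)  # dense topic embeddings

    def forward(self, bow: torch.Tensor, tau: float = 0.5):
        logits = self.encoder(bow)
        # Differentiable (relaxed) sample from the discrete topic distribution.
        z = F.gumbel_softmax(logits, tau=tau, hard=False)  # (batch, n_topics)
        doc_emb = z @ self.topic_emb.weight                # mixture of topic embeddings
        return doc_emb, z

# Usage: encode 4 documents over a 2,000-word vocabulary into 128-d embeddings.
enc = DiscreteTopicEncoder(vocab_size=2000, n_topics=50, emb_dim=128)
doc_emb, z = enc(torch.rand(4, 2000))
```

Because the topic embeddings live in the same dense space as (and can be tied to) word embeddings, the resulting document and topic representations plug directly into downstream neural architectures.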