Abstract
Lifelong topic modeling (LTM), an emerging paradigm for never-ending topic learning, aims to yield higher-quality topics over time by accumulating knowledge from the past and applying it to future learning. In this paper, we propose a novel lifelong topic model based on non-negative matrix factorization (NMF), called Affinity Regularized NMF for LTM (NMF-LTM), which to the best of our knowledge is distinct from the popular LDA-based LTMs. NMF-LTM achieves lifelong learning by introducing a word-word graph Laplacian as a semantic affinity regularizer. Other priors, such as sparsity, diversity, and between-class affinity, are incorporated as well for better performance, and a theoretical guarantee is provided for the algorithm's convergence to a local minimum. Extensive experiments on various public corpora demonstrate the effectiveness of NMF-LTM, particularly its human-like behavior in two carefully designed learning tasks and its ability to model topics in big data. A further exploration of semantic relatedness in knowledge graphs and a case study on a large-scale real-world corpus exhibit the strength of NMF-LTM in discovering high-quality topics efficiently and robustly.
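The abstract mentions NMF with a word-word graph Laplacian as a semantic affinity regularizer. The paper's exact NMF-LTM update rules are not given here; the following is a minimal sketch of the standard graph-regularized NMF multiplicative-update scheme that this family of methods builds on. The function name `nmf_laplacian` and the parameter `lam` (regularization weight) are hypothetical names chosen for illustration.

```python
import numpy as np

def nmf_laplacian(X, A, k, lam=0.1, iters=200, seed=0):
    """Sketch of NMF with word-word graph Laplacian regularization.

    X : (docs x words) non-negative document-word matrix.
    A : (words x words) non-negative word-affinity matrix.
    Approximately minimizes ||X - W H||_F^2 + lam * Tr(H L H^T),
    where L = D - A is the graph Laplacian of the word graph,
    using standard multiplicative updates (all factors stay non-negative).
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 1e-3   # document-topic factor
    H = rng.random((k, m)) + 1e-3   # topic-word factor (regularized)
    D = np.diag(A.sum(axis=1))      # degree matrix of the word graph
    eps = 1e-10                     # avoid division by zero
    for _ in range(iters):
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        # Laplacian term splits as L = D - A: -A goes to the numerator,
        # D to the denominator, keeping the update non-negative.
        H *= (W.T @ X + lam * H @ A) / (W.T @ W @ H + lam * H @ D + eps)
    return W, H
```

The affinity term `lam * Tr(H L H^T)` pulls the topic-word rows of semantically related words (high affinity in `A`) toward similar values, which is how word-level knowledge carried over from past tasks can steer the current factorization.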
Published in: IEEE Transactions on Knowledge and Data Engineering