Abstract

Topic modeling is a powerful technique for unsupervised analysis of large document collections. Topic models represent latent topics in text using hidden random variables and discover that structure with posterior inference. They have a wide range of applications, such as tag recommendation, text categorization, keyword extraction, and similarity search, across the broad fields of text mining, information retrieval, and statistical language modeling. In this work, a dataset of 200 abstracts spanning four topics is collected from journals in two different domains for the task of tagging journal abstracts. Document models are built using LDA (Latent Dirichlet Allocation) with Collapsed Variational Bayes (CVB0) and Gibbs sampling, and the built models are then used to extract appropriate tags for the abstracts. The performance of the models is analyzed with the perplexity evaluation measure, which shows that Gibbs sampling outperforms CVB0. The tags extracted by the two algorithms remain almost the same.
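Since the abstract does not include the implementation, the following is a minimal sketch of collapsed Gibbs sampling for LDA together with the perplexity measure used for evaluation. It is not the authors' code: the function names, the toy corpus, the hyperparameters (alpha, beta, n_iter), and the point-estimate formulas for theta and phi are illustrative assumptions.

```python
import numpy as np

def gibbs_lda(docs, n_topics, n_words, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA.

    docs: list of documents, each a list of word ids, e.g. [[0, 3, 3], [1, 2]].
    Returns (theta, phi): document-topic and topic-word distributions.
    """
    rng = np.random.default_rng(seed)
    n_docs = len(docs)
    ndk = np.zeros((n_docs, n_topics))   # per-document topic counts
    nkw = np.zeros((n_topics, n_words))  # per-topic word counts
    nk = np.zeros(n_topics)              # per-topic totals
    z = [np.zeros(len(d), dtype=int) for d in docs]

    # Random initialization of topic assignments.
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = rng.integers(n_topics)
            z[d][i] = k
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # Remove the current assignment from the counts.
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # Full conditional p(z = k | rest), up to a constant.
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + n_words * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

    # Smoothed point estimates of the posterior means.
    theta = (ndk + alpha) / (ndk.sum(axis=1, keepdims=True) + n_topics * alpha)
    phi = (nkw + beta) / (nk[:, None] + n_words * beta)
    return theta, phi

def perplexity(docs, theta, phi):
    """Perplexity = exp(-log-likelihood / token count); lower is better."""
    ll, n_tokens = 0.0, 0
    for d, doc in enumerate(docs):
        for w in doc:
            ll += np.log(theta[d] @ phi[:, w])
            n_tokens += 1
    return np.exp(-ll / n_tokens)

# Toy usage on a corpus of word ids (the paper's setup would instead use
# the 200 collected abstracts with n_topics=4).
docs = [[0, 1, 1, 2], [2, 3, 3, 0]]
theta, phi = gibbs_lda(docs, n_topics=2, n_words=4, n_iter=100)
print(perplexity(docs, theta, phi))
```

Under this sketch, tags for an abstract would be read off as the highest-probability words in phi for that document's dominant topics in theta, which matches the tag-extraction step described above; a CVB0 variant would replace the sampling step with a deterministic update of variational topic responsibilities over the same count statistics.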
