Abstract

Topic modeling has been widely used to mine topics from documents. A key weakness of topic modeling, however, is that it needs a large amount of data (e.g., thousands of documents) to provide reliable statistics for generating coherent topics, while in practice many document collections do not have so many documents. Given a small corpus, the topics produced by the classic topic model LDA are very poor. Even with a large amount of data, fully unsupervised topic models can still produce unsatisfactory results. In recent years, knowledge-based topic models have been proposed that ask human users to provide prior domain knowledge to guide the model toward better topics. Our research takes a very different approach: we propose to learn as humans do, i.e., to retain the results learned in the past and use them to help future learning. When faced with a new task, the method first mines reliable (prior) knowledge from past learning/modeling results and then uses it to guide model inference toward more coherent topics. This approach is possible because big data is readily available on the Web. The algorithm mines two forms of knowledge: must-links (meaning two words should be in the same topic) and cannot-links (meaning two words should not be in the same topic). Two issues in automatic knowledge mining are also addressed, namely wrong knowledge and knowledge transferability. Experimental results using review documents from 100 product domains show that the proposed method significantly outperforms state-of-the-art baselines.
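As a rough illustration of the knowledge-mining step described above (a minimal sketch, not the paper's actual algorithm), the code below mines candidate must-links and cannot-links from the top words of topics learned in past domains. The function name and all thresholds (top_k, must_sup, cannot_ratio) are hypothetical and chosen only for illustration.

```python
from collections import Counter
from itertools import combinations

def mine_links(prior_topics, top_k=15, must_sup=5, cannot_ratio=0.05):
    """Sketch: mine must-links and cannot-links from past topic results.

    prior_topics: list of topics, each a ranked list of words
                  (e.g., the top words of topics learned in earlier domains).
    All thresholds here are illustrative, not the paper's settings.
    """
    pair_count = Counter()  # number of past topics whose top words contain both words
    word_count = Counter()  # number of past topics whose top words contain the word

    for topic in prior_topics:
        top_words = set(topic[:top_k])
        word_count.update(top_words)
        pair_count.update(combinations(sorted(top_words), 2))

    # Must-link: the pair co-occurs in many past topics, so the two
    # words likely belong to the same topic in a new domain.
    must_links = {p for p, c in pair_count.items() if c >= must_sup}

    # Cannot-link: both words are individually frequent across past
    # topics but almost never share one, suggesting different topics.
    cannot_links = set()
    for w1, w2 in combinations(sorted(word_count), 2):
        both_freq = min(word_count[w1], word_count[w2])
        if both_freq >= must_sup and pair_count[(w1, w2)] / both_freq < cannot_ratio:
            cannot_links.add((w1, w2))

    return must_links, cannot_links
```

In the full approach, such mined links would additionally be vetted for wrong knowledge and checked for transferability to the new domain before being used to guide model inference.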
