Abstract

Bayesian text classifiers suffer from the data sparsity problem, especially when the training set is very small. The frequently used Laplacian smoothing and corpus-based background smoothing are not effective in handling it. Instead, we propose a novel semantic smoothing method to address the sparsity problem. Our method extracts explicit topic signatures (e.g., words, multiword phrases, and ontology-based concepts) from a document and then statistically maps them onto single-word features. We conduct comprehensive experiments on three test collections (OHSUMED, LATimes, and 20NG) to compare semantic smoothing with other approaches. When the number of training documents is small, the Bayesian classifier with semantic smoothing not only outperforms classifiers with background smoothing and Laplacian smoothing, but also beats state-of-the-art active learning classifiers and SVM classifiers. We also compare the three types of topic signatures with respect to their effectiveness and efficiency for semantic smoothing.
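To make the idea concrete, here is a minimal sketch of how a class-conditional word model might mix a maximum-likelihood estimate with a topic-signature translation model, i.e. p(w|c) = (1 − α)·p_ml(w|c) + α·Σ_t p(w|t)·p(t|c). The function names, the `translation` dictionary structure, and the mixture weight α are illustrative assumptions; the paper estimates the signature-to-word mappings statistically from the corpus, which is not shown here.

```python
from collections import Counter

def semantic_smooth(class_tokens, class_signatures, translation, vocab, alpha=0.4):
    """Illustrative semantic smoothing for one class c.

    class_tokens:     single-word tokens observed in class c's training docs
    class_signatures: topic signatures (words/phrases/concepts) extracted
                      from the same docs
    translation:      assumed mapping: signature t -> {word: p(w|t)}, with
                      each inner distribution summing to 1 over `vocab`
    Returns p(w|c) = (1-alpha)*p_ml(w|c) + alpha*sum_t p(w|t)*p(t|c).
    """
    word_counts = Counter(class_tokens)
    word_total = sum(word_counts.values()) or 1
    sig_counts = Counter(class_signatures)
    sig_total = sum(sig_counts.values()) or 1
    probs = {}
    for w in vocab:
        p_ml = word_counts[w] / word_total                      # ML estimate
        p_trans = sum((sig_counts[t] / sig_total)               # p(t|c)
                      * translation[t].get(w, 0.0)              # p(w|t)
                      for t in sig_counts)
        probs[w] = (1 - alpha) * p_ml + alpha * p_trans
    return probs
```

With a toy translation model, a word unseen in the class (zero ML probability) still receives mass through signatures that co-occur with it, which is exactly what Laplacian smoothing cannot do in an informed way.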
