Abstract

Text clustering is most commonly treated as a fully automated task without user supervision. However, clustering performance can be improved by supervision in the form of pairwise (must-link and cannot-link) constraints. This paper introduces a rigorous Bayesian framework for semi-supervised clustering which incorporates human supervision in the form of pairwise constraints in both the expectation step and the maximization step of the EM algorithm. During the expectation step, we model the pairwise constraints as random variables, which enables us to capture the uncertainty in constraints in a principled manner. During the maximization step, we treat the constrained documents as prior information and adjust the probability mass of the model distribution to emphasize words occurring in constrained documents, using Bayesian regularization. Bayesian conjugate prior modeling makes the maximization step more efficient than the gradient search methods used in traditional distance-metric learning. Experimental results on several text datasets demonstrate significant advantages over existing algorithms.
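The two EM modifications described above can be illustrated with a minimal sketch. This is a hypothetical implementation, not the paper's exact model: it uses a multinomial mixture over word counts, soft constraint penalties in the E-step as a stand-in for the paper's random-variable treatment of constraints, and Dirichlet (conjugate-prior) smoothing in the M-step. All function and variable names are assumptions for illustration.

```python
import numpy as np

def constrained_em(X, K, must, cannot, alpha=1.0, penalty=5.0,
                   iters=50, seed=0):
    """Sketch of constrained EM for text clustering (illustrative only).

    X      : (n_docs, n_words) term-count matrix
    K      : number of clusters
    must   : list of (i, j) must-link document pairs
    cannot : list of (i, j) cannot-link document pairs
    alpha  : Dirichlet pseudo-count (conjugate prior, closed-form M-step)
    """
    rng = np.random.default_rng(seed)
    n, V = X.shape
    # Responsibilities, randomly initialized.
    R = rng.dirichlet(np.ones(K), size=n)
    for _ in range(iters):
        # M-step: conjugate Dirichlet prior gives closed-form smoothed
        # estimates (no gradient search needed).
        pi = R.sum(axis=0) + alpha
        pi /= pi.sum()
        theta = R.T @ X + alpha                     # (K, V) pseudo-counts
        theta /= theta.sum(axis=1, keepdims=True)
        # E-step: multinomial log-likelihood plus soft constraint terms.
        logp = np.log(pi) + X @ np.log(theta).T     # (n, K)
        for i, j in must:     # pull linked documents toward the same cluster
            logp[i] += penalty * R[j]
            logp[j] += penalty * R[i]
        for i, j in cannot:   # push linked documents toward different clusters
            logp[i] -= penalty * R[j]
            logp[j] -= penalty * R[i]
        logp -= logp.max(axis=1, keepdims=True)     # stabilize exp
        R = np.exp(logp)
        R /= R.sum(axis=1, keepdims=True)
    return R.argmax(axis=1)

# Toy usage: four tiny "documents" over a 3-word vocabulary.
X = np.array([[4, 1, 0],
              [3, 2, 0],
              [0, 1, 4],
              [0, 2, 3]])
labels = constrained_em(X, K=2, must=[(0, 1)], cannot=[(0, 2)])
```

In this sketch the constraint influence is a fixed penalty weight; the paper's framework instead models constraint uncertainty probabilistically, which this toy does not capture.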
