Abstract

Short text clustering is one of the fundamental tasks in natural language processing. Unlike traditional documents, short texts are ambiguous and sparse because of their short form and the lack of recurring word usage from one text to another, making it very challenging to apply conventional machine learning algorithms directly. In this article, we propose two novel approaches for short text clustering: the collapsed Gibbs sampling infinite generalized Dirichlet multinomial mixture model (infinite GSGDMM) and the collapsed Gibbs sampling infinite Beta-Liouville multinomial mixture model (infinite GSBLMM). We adopt two flexible and practical priors for the multinomial distribution: the first integrates the generalized Dirichlet distribution, while the second is based on the Beta-Liouville distribution. We evaluate the proposed approaches on two well-known benchmark datasets, Google News and Tweet. The experimental results demonstrate the effectiveness of our models compared to baseline approaches that use Dirichlet priors. We further propose to improve the performance of our methods with an online clustering procedure. We also evaluate our methods on the outlier detection task, where they achieve accurate results.
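To make the family of models concrete, the following is a minimal sketch of the baseline the abstract compares against: collapsed Gibbs sampling for a finite Dirichlet multinomial mixture (the GSDMM-style approach with a symmetric Dirichlet prior). The proposed infinite GSGDMM and GSBLMM models replace the Dirichlet prior with generalized Dirichlet or Beta-Liouville priors and let the number of clusters grow; those extensions are not shown here. Function and parameter names (`gsdmm`, `alpha`, `beta`, `K`) are illustrative assumptions, not the authors' code.

```python
import math
import random

def gsdmm(docs, K=4, alpha=0.1, beta=0.1, iters=15, seed=0):
    """Sketch of collapsed Gibbs sampling for a Dirichlet multinomial
    mixture over short texts (baseline with a symmetric Dirichlet prior).

    docs: list of token lists. Returns a cluster label per document.
    """
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})  # vocabulary size
    m_z = [0] * K                  # documents per cluster
    n_z = [0] * K                  # total tokens per cluster
    n_zw = [dict() for _ in range(K)]  # per-cluster word counts
    z = []
    # Random initial assignment of documents to clusters.
    for d in docs:
        k = rng.randrange(K)
        z.append(k)
        m_z[k] += 1
        n_z[k] += len(d)
        for w in d:
            n_zw[k][w] = n_zw[k].get(w, 0) + 1
    for _ in range(iters):
        for i, d in enumerate(docs):
            # Remove document i from its current cluster.
            k = z[i]
            m_z[k] -= 1
            n_z[k] -= len(d)
            for w in d:
                n_zw[k][w] -= 1
            # Collapsed conditional: P(z_i = c | rest) in log space.
            logp = []
            for c in range(K):
                lp = math.log(m_z[c] + alpha)
                seen = {}
                for w in d:
                    lp += math.log(n_zw[c].get(w, 0) + beta + seen.get(w, 0))
                    seen[w] = seen.get(w, 0) + 1
                for j in range(len(d)):
                    lp -= math.log(n_z[c] + V * beta + j)
                logp.append(lp)
            # Sample a new cluster from the normalized probabilities.
            mx = max(logp)
            ps = [math.exp(l - mx) for l in logp]
            r = rng.random() * sum(ps)
            for c, p in enumerate(ps):
                r -= p
                if r <= 0:
                    break
            z[i] = c
            m_z[c] += 1
            n_z[c] += len(d)
            for w in d:
                n_zw[c][w] = n_zw[c].get(w, 0) + 1
    return z

docs = [["nba", "game", "score"], ["nba", "playoffs", "game"],
        ["election", "vote", "poll"], ["vote", "election", "result"]]
labels = gsdmm(docs, K=4)
```

The collapsed sampler integrates out the mixture weights and the per-cluster word distributions, so only the count statistics are tracked; this is what makes the Gibbs updates cheap enough for large short-text corpora.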