Abstract
K-means, along with its variants, is the most widely used family of partitional clustering algorithms. Generally speaking, algorithms in this family start by initializing a number of data points as cluster centres, and then iteratively refine these centres based on the current partition of the dataset. Given a set of cluster centres, inducing the partition over the dataset involves finding the nearest (or most similar) cluster centre for each data point, an O(NK) operation, where N and K are the number of data points and the number of clusters, respectively. In our proposed approach, we avoid the explicit computation of these distances for sparse vectors, e.g. documents, by utilizing a fundamental operation, namely TOP(x), which returns a list of the vectors most similar to a given vector x. A standard way to store sparse vectors and retrieve the ones most similar to a query vector is the inverted list data structure. In our proposed method, we use the TOP(x) function, firstly, to select cluster centres that are likely to be dissimilar to each other; secondly, to obtain the partition at each iteration of K-means without explicitly computing the pairwise similarities between the centroid and non-centroid vectors; and thirdly, to avoid recomputation of the cluster centroids by adopting a centrality-based heuristic. We demonstrate the effectiveness of our proposed algorithm on the TREC-2011 Microblog dataset, a large collection of about 14M tweets. Our experiments show that the proposed method is about 35x faster than the standard K-means algorithm and produces more effective clusters.
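To illustrate the core TOP(x) operation the abstract builds on, the sketch below shows a minimal inverted-index retrieval over sparse vectors. This is an illustrative reconstruction, not the paper's implementation: the representation of vectors as term-weight dictionaries, the dot-product similarity, and the function names `build_inverted_index` and `top` are all assumptions made here for clarity.

```python
from collections import defaultdict
import heapq

def build_inverted_index(docs):
    """docs: list of sparse vectors, each a {term: weight} dict.
    Returns term -> list of (doc_id, weight) postings."""
    index = defaultdict(list)
    for doc_id, vec in enumerate(docs):
        for term, w in vec.items():
            index[term].append((doc_id, w))
    return index

def top(x, index, k=5):
    """TOP(x): the k documents most similar to sparse query vector x.
    Similarities are accumulated only over the postings of x's terms,
    so documents sharing no terms with x are never examined --
    the key saving over an explicit O(NK) distance computation."""
    scores = defaultdict(float)
    for term, wx in x.items():
        for doc_id, wd in index.get(term, []):
            scores[doc_id] += wx * wd  # dot-product similarity
    return heapq.nlargest(k, scores.items(), key=lambda kv: kv[1])
```

With an index built once over the collection, each call to `top` touches only the postings lists of the query's non-zero terms, which is what makes repeated nearest-centre queries cheap for short, sparse documents such as tweets.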