Abstract

K-means plays an important role in many fields of data mining. However, k-means is often sensitive to its random selection of initial seeds. Motivated by this, this article proposes an optimized k-means clustering method, named k*-means, along with three optimization principles. First, we propose a hierarchical optimization principle initialized with k* seeds (k* > k) to reduce the risk of random seed selection, and then use the proposed "top-n nearest clusters merging" to merge the nearest clusters in each round until the number of clusters reaches k. Second, we propose an "optimized update principle" that updates clusters incrementally from the moved points, instead of recalculating the mean and [Formula: see text] of each cluster in every k-means iteration, to minimize computation cost. Third, we propose a "cluster pruning strategy" to improve the efficiency of k-means; this strategy omits the farther clusters to shrink the adjustable space in each iteration. Experiments performed on real UCI and synthetic datasets verify the efficiency and effectiveness of our proposed algorithm.
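The "optimized update principle" above avoids recomputing a cluster's statistics from scratch whenever points change membership. Below is a minimal sketch of an incremental mean update in that spirit; the function and variable names are illustrative rather than taken from the paper, and the paper additionally maintains the per-cluster quantity elided as [Formula: see text] above.

```python
import numpy as np

def move_point(x, mean_src, n_src, mean_dst, n_dst):
    """Incrementally update two cluster means when point x moves from the
    source cluster to the destination cluster. Cost is O(d) per moved point
    instead of re-averaging every member of both clusters."""
    assert n_src > 1, "the source cluster must keep at least one point"
    new_mean_src = (mean_src * n_src - x) / (n_src - 1)  # remove x's contribution
    new_mean_dst = (mean_dst * n_dst + x) / (n_dst + 1)  # add x's contribution
    return new_mean_src, n_src - 1, new_mean_dst, n_dst + 1
```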

Highlights

  • Clustering partitions data into clusters with respect to similarity measures and is one of the most important tasks in data analysis, underpinning pattern discovery, pattern recognition, data summarization, and image processing [1]

  • We propose three optimization principles to further cut down the CPU cost

  • We propose a novel optimized hierarchical clustering method incorporating three optimization principles, namely "top-n nearest clusters merging," "optimized update principle," and "cluster pruning strategy," to achieve robust, effective, and efficient clustering (a generic sketch of the pruning idea follows this list)
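As noted in the last highlight, here is a generic sketch of the "cluster pruning strategy" idea. The abstract only states that farther clusters are omitted in each iteration; the sketch uses a standard triangle-inequality bound to skip such clusters, which may differ from the paper's exact rule, and all names are illustrative.

```python
import numpy as np

def assign_with_pruning(x, centers, current_idx):
    """Find the nearest center to x, skipping ("pruning") centers that a
    triangle-inequality bound proves cannot beat the current assignment."""
    d_cur = np.linalg.norm(x - centers[current_idx])
    best_idx, best_dist = current_idx, d_cur
    for j, c in enumerate(centers):
        if j == current_idx:
            continue
        # If center j is at least 2 * d_cur away from the current center,
        # then d(x, c_j) >= d(c_cur, c_j) - d(x, c_cur) >= d_cur, so it
        # cannot be strictly closer than the current center -- skip it.
        if np.linalg.norm(centers[current_idx] - c) >= 2.0 * d_cur:
            continue
        d_j = np.linalg.norm(x - c)
        if d_j < best_dist:
            best_idx, best_dist = j, d_j
    return best_idx, best_dist
```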

Summary

Introduction

Clustering partitions data into clusters with respect to similarity measures and is one of the most important tasks in data analysis, underpinning pattern discovery, pattern recognition, data summarization, and image processing [1]. In k*-means, we first start k-means with k* (k* > k) initial centers that are selected randomly, and iteratively use the "top-n nearest clusters merging" to merge the closest clusters, further refining them with k-means, until the total number of clusters reaches k. In this process, we propose three optimization principles to minimize the CPU cost. We also elaborate proofs of the proposed principles and strategies: "top-n nearest clusters merging" (Lemma 1), the "optimized update principle" (Lemma 2 and Lemma 3), and the "cluster pruning strategy" (Lemma 4). We find that we obtain a better clustering result when n is set to 2.
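To make the overall procedure concrete, here is a minimal sketch of one plausible reading of the k*-means outer loop. The names (k_star_means, n_merge), the use of scikit-learn's KMeans for the per-round refinement, and the unweighted midpoint used when merging two centers are our assumptions; the paper's exact merging rule may differ (e.g., it may merge the n nearest clusters into one, or use size-weighted means).

```python
import numpy as np
from sklearn.cluster import KMeans  # used only for the per-round refinement

def k_star_means(X, k, k_star, n_merge=2, seed=0):
    """Start from k_star > k random seeds, refine with k-means, and merge
    the n_merge closest pairs of centers per round until k clusters remain."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k_star, replace=False)]
    while len(centers) > k:
        # Refine the current centers with ordinary Lloyd iterations.
        km = KMeans(n_clusters=len(centers), init=centers, n_init=1).fit(X)
        centers = km.cluster_centers_
        # Merge the closest pairs of centers, never dropping below k clusters.
        for _ in range(min(n_merge, len(centers) - k)):
            d = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
            np.fill_diagonal(d, np.inf)
            i, j = np.unravel_index(np.argmin(d), d.shape)
            merged = (centers[i] + centers[j]) / 2.0  # midpoint for simplicity
            centers = np.vstack([np.delete(centers, [i, j], axis=0), merged])
    return KMeans(n_clusters=k, init=centers, n_init=1).fit(X)  # final refinement
```

Under this reading, n_merge = 2 removes two clusters per round; the paper reports that n = 2 gives the best clustering result.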

Related work
Experimental setup and methodologies
Findings
Conclusion
