Abstract
Cluster analysis is a prominent data mining technique in knowledge discovery; it uncovers hidden patterns in data. K-Means, K-Modes and K-Prototypes are partition-based clustering algorithms that select their initial centroids randomly. Because of this random selection, these algorithms often converge to local optima. To address this issue, the strategy of the Crow Search Algorithm is employed with these algorithms to obtain globally optimal solutions. With advances in information technology, data volumes have grown drastically from terabytes to petabytes. To make the proposed algorithms suitable for such voluminous data, they are implemented in parallel with the Hadoop MapReduce framework. The proposed algorithms are evaluated on large-scale data, and the results are compared in terms of cluster evaluation measures and computation time across different numbers of nodes.
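The abstract describes using the Crow Search Algorithm to replace random centroid initialization. The summary does not give pseudocode, so the following is a minimal in-memory sketch of standard Crow Search applied to centroid selection, assuming the usual CSA parameters (flight length `fl`, awareness probability `ap`); the function name `csa_kmeans_init` and all defaults are illustrative, and the paper's actual parallel Hadoop implementation will differ.

```python
import numpy as np

def sse(centroids, X):
    # Sum of squared distances from each point to its nearest centroid
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).sum()

def csa_kmeans_init(X, k, n_crows=10, n_iter=50, fl=2.0, ap=0.1, seed=0):
    """Pick K-Means starting centroids by minimising SSE with Crow Search."""
    rng = np.random.default_rng(seed)
    lo, hi = X.min(axis=0), X.max(axis=0)
    # Each crow's position encodes one candidate set of k centroids
    crows = rng.uniform(lo, hi, size=(n_crows, k, X.shape[1]))
    memory = crows.copy()                      # best position each crow remembers
    mem_fit = np.array([sse(m, X) for m in memory])
    for _ in range(n_iter):
        for i in range(n_crows):
            j = rng.integers(n_crows)          # crow i follows a random crow j
            if rng.random() >= ap:
                # Crow j is unaware: i moves toward j's memorised food source
                new = crows[i] + rng.random() * fl * (memory[j] - crows[i])
            else:
                # Crow j is aware: i is fooled and moves to a random position
                new = rng.uniform(lo, hi, size=(k, X.shape[1]))
            new = np.clip(new, lo, hi)
            crows[i] = new
            fit = sse(new, X)
            if fit < mem_fit[i]:               # update memory only on improvement
                memory[i], mem_fit[i] = new, fit
    return memory[mem_fit.argmin()]            # best centroid set found
```

Seeding K-Means with the returned centroids, rather than a single random draw, is what lets the combined algorithm escape poor local optima.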
Highlights
Clustering is an unsupervised classification technique that extracts useful knowledge from data without knowing the class labels
The silhouette values obtained over various iterations show that the proposed Parallel CSAK-Means algorithm outperforms Parallel K-Means and Parallel PSOK-Means on all data sets
The Silhouette, F-Measure, Rand Index and Purity results show that Parallel CSAK-Means scores higher than the Parallel K-Means and Parallel PSOK-Means clustering algorithms
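The highlights compare algorithms using the silhouette coefficient (among other measures). As a reference for how that measure is computed, here is a small sketch of the standard silhouette formula, (b - a) / max(a, b), where a is the mean intra-cluster distance and b the mean distance to the nearest other cluster; this is the textbook definition, not code from the paper.

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette coefficient over all samples."""
    n = len(X)
    # Pairwise Euclidean distances between all samples
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
    scores = []
    for i in range(n):
        same = labels == labels[i]
        if same.sum() <= 1:
            scores.append(0.0)    # convention for singleton clusters
            continue
        a = D[i][same & (np.arange(n) != i)].mean()   # mean intra-cluster distance
        b = min(D[i][labels == l].mean()              # nearest other cluster
                for l in set(labels.tolist()) if l != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```

Values near 1 indicate compact, well-separated clusters, which is why a higher silhouette for Parallel CSAK-Means indicates a better clustering.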
Summary
Clustering is an unsupervised classification technique that extracts useful knowledge from data without knowing the class labels. K-Means, K-Modes and K-Prototypes are partition-based clustering algorithms that handle numeric, categorical, and mixed numeric-and-categorical data objects, respectively. K-Means is one of the most widely used partitional clustering algorithms for numerical data; it has been extended to handle categorical and mixed numeric-and-categorical data, yielding the K-Modes and K-Prototypes algorithms (Huang, 1997, 1998). The authors note that each optimization algorithm has its own parameters and that fixing optimum values for these parameters is tedious. The algorithm could be extended to automatically determine the optimal number of clusters for a data set.
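The summary notes that K-Modes extends K-Means to categorical data. The standard way it does this (as in Huang's formulation) is to replace Euclidean distance with simple matching dissimilarity and cluster means with per-attribute modes; the sketch below illustrates those two building blocks, with illustrative function names.

```python
from collections import Counter

def matching_dissimilarity(a, b):
    # Simple matching distance: number of attributes on which two records differ
    return sum(x != y for x, y in zip(a, b))

def cluster_mode(records):
    # Per-attribute most frequent category: the K-Modes analogue of a centroid
    return tuple(Counter(col).most_common(1)[0][0] for col in zip(*records))
```

Assigning each record to the cluster whose mode is nearest under this distance mirrors the K-Means assignment step, with modes standing in for means; K-Prototypes combines both distances for mixed data.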
Published in: International Journal of Cognitive Informatics and Natural Intelligence