Abstract

The conventional procedures of clustering algorithms cannot cope with managing and analyzing the rapidly growing volumes of data generated from different sources. Parallel clustering is one of the robust solutions to this problem. The Apache Hadoop architecture is one of the ecosystems that provides the capability to store and process data in a distributed and parallel fashion. In this paper, a parallel model is designed to run the k-means clustering algorithm on the Apache Hadoop ecosystem by connecting three nodes: one serves as the master (name) node and the other two as client (data) nodes. The aim is to reduce the time needed to manage a large-scale healthcare insurance dataset of 11 GB using the machine learning algorithms provided by the Mahout framework. The experimental results show that the proposed model can efficiently process large datasets. The parallel k-means algorithm outperforms the sequential k-means algorithm in terms of execution time: processing the 11 GB dataset takes around 1.847 hours with the parallel k-means algorithm, compared with 68.567 hours with the sequential k-means algorithm. From this we deduce that as the number of nodes in the parallel system increases, the computation time of the proposed algorithm decreases.
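
The paper does not reproduce its job-submission code, but the sketch below illustrates how such a parallel k-means run could be launched through Mahout's Java driver. It is a minimal example under stated assumptions: the HDFS paths, the cluster count, and the iteration settings are hypothetical placeholders, and the exact KMeansDriver and RandomSeedGenerator signatures vary between Mahout releases (the 0.7-era API is assumed here).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.mahout.clustering.kmeans.KMeansDriver;
import org.apache.mahout.clustering.kmeans.RandomSeedGenerator;
import org.apache.mahout.common.distance.EuclideanDistanceMeasure;

public class ParallelKMeansJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();              // picks up the cluster's Hadoop settings

        Path vectors = new Path("/insurance/vectors");         // hypothetical HDFS path to the vectorized dataset
        Path seeds   = new Path("/insurance/seed-clusters");   // hypothetical path for the initial centroids
        Path output  = new Path("/insurance/kmeans-output");   // per-iteration and final cluster output

        int k = 10;                                            // illustrative cluster count, not the paper's setting
        int maxIterations = 20;
        double convergenceDelta = 0.5;

        // Sample k random input points as the starting centroids.
        RandomSeedGenerator.buildRandom(conf, vectors, seeds, k, new EuclideanDistanceMeasure());

        // Launch the iterative k-means MapReduce job on the cluster.
        KMeansDriver.run(conf, vectors, seeds, output,
                         convergenceDelta, maxIterations,
                         true,    // runClustering: also assign each point to its final cluster
                         0.0,     // clusterClassificationThreshold
                         false);  // runSequential = false keeps the iterations distributed
    }
}

Keeping runSequential false is what makes each iteration run as a MapReduce job across the data nodes rather than on a single machine.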

Highlights

  • Big data is a combination of large-volume, substantial, and multi-format data created from varied and separate data sources

  • The connection between the master and slave nodes is established; before processing in Mahout, the data must be uploaded to the Hadoop Distributed File System (HDFS) and converted from Comma-Separated Values (CSV) into vectors (a sketch of this stage follows these highlights)

  • The experiments vary the initial number of clusters and compare the run time of the parallel and sequential k-means clustering
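
A rough sketch of the CSV-to-vectors stage mentioned above is given below: it reads a numeric CSV file and writes Mahout VectorWritable records into a Hadoop SequenceFile on HDFS, the input format that Mahout's clustering jobs consume. The file names and HDFS paths are hypothetical, purely numeric columns are assumed, and this is an illustration rather than the authors' code.

import java.io.BufferedReader;
import java.io.FileReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.mahout.math.DenseVector;
import org.apache.mahout.math.VectorWritable;

public class CsvToVectors {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path vectorsPath = new Path("/insurance/vectors/part-00000");   // hypothetical HDFS target

        // Each CSV row becomes one (key, VectorWritable) record in the SequenceFile.
        try (SequenceFile.Writer writer = SequenceFile.createWriter(
                 fs, conf, vectorsPath, Text.class, VectorWritable.class);
             BufferedReader csv = new BufferedReader(new FileReader("insurance.csv"))) {

            String line;
            long row = 0;
            while ((line = csv.readLine()) != null) {
                String[] fields = line.split(",");
                double[] values = new double[fields.length];
                for (int i = 0; i < fields.length; i++) {
                    values[i] = Double.parseDouble(fields[i]);          // assumes purely numeric columns
                }
                writer.append(new Text("rec-" + row++), new VectorWritable(new DenseVector(values)));
            }
        }
    }
}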

Introduction

Big data is a combination of large-volume, substantial, and multi-format data created from varied and separate data sources. Researchers and scientists consider big data one of the most important subjects in computer science nowadays [1]. Social media sites, hospital records, and several other new sources are behind the phenomenon of big data [2]. A data warehouse cannot deal with the whole dataset because of its vast size [3]. Conventional algorithms are incapable of dealing with such enormous amounts of data, so they are not efficient for analyzing them [4]. The traditional k-means clustering algorithm [5, 6] is not sufficient to manipulate such a massive amount of data. Hadoop and MapReduce tools can be used to deal with such data [7].
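
To make the parallelization idea concrete, the sketch below expresses a single k-means iteration as a Hadoop map/reduce pair: mappers assign each record to its nearest centroid, and reducers recompute each centroid as the mean of its assigned points. This is an illustrative outline rather than Mahout's implementation; the hard-coded centroids and the assumption of two-dimensional, comma-separated numeric records are simplifications.

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

/* One k-means iteration expressed as a map/reduce pair; illustrative, not Mahout's code. */
public class KMeansIteration {

    /* Assigns each record (a comma-separated numeric row) to its nearest centroid. */
    public static class AssignMapper extends Mapper<LongWritable, Text, IntWritable, Text> {
        // Hard-coded two-dimensional centroids for brevity; a real job would load them
        // from the previous iteration's output on HDFS.
        private final double[][] centroids = { {0.0, 0.0}, {5.0, 5.0} };

        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split(",");
            double[] point = new double[fields.length];
            for (int i = 0; i < fields.length; i++) point[i] = Double.parseDouble(fields[i]);

            int nearest = 0;
            double best = Double.MAX_VALUE;
            for (int c = 0; c < centroids.length; c++) {
                double d = 0;
                for (int i = 0; i < point.length; i++) {
                    double diff = point[i] - centroids[c][i];
                    d += diff * diff;                     // squared Euclidean distance
                }
                if (d < best) { best = d; nearest = c; }
            }
            ctx.write(new IntWritable(nearest), value);   // regroup the points by cluster id
        }
    }

    /* Recomputes each centroid as the mean of the points assigned to it. */
    public static class UpdateReducer extends Reducer<IntWritable, Text, IntWritable, Text> {
        @Override
        protected void reduce(IntWritable clusterId, Iterable<Text> points, Context ctx)
                throws IOException, InterruptedException {
            double[] sum = null;
            long count = 0;
            for (Text p : points) {
                String[] fields = p.toString().split(",");
                if (sum == null) sum = new double[fields.length];
                for (int i = 0; i < fields.length; i++) sum[i] += Double.parseDouble(fields[i]);
                count++;
            }
            StringBuilder centroid = new StringBuilder();
            for (int i = 0; i < sum.length; i++) {
                if (i > 0) centroid.append(',');
                centroid.append(sum[i] / count);
            }
            ctx.write(clusterId, new Text(centroid.toString()));   // new centroid for the next iteration
        }
    }
}

A driver would chain such map/reduce passes until the centroids stop moving, which is essentially what Mahout's k-means job automates over the data blocks stored in HDFS.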
