Abstract

Hadoop and Apache Spark have become popular frameworks for distributed big-data processing. This research configures Hadoop and Spark to train and test machine learning models on big data using distributed methods from MLlib, namely linear regression and multiple linear regression; an LSTM model from an external library is used for additional experimentation. The experiments run on three desktop machines configured as single-node and multi-node clusters. Three datasets serve as case studies: bitcoin (3,613,767 rows), gold-price (5,585 rows), and housing-price (23,613 rows). The distributed-computation tests allocate a uniform number of processor cores across the three machines and measure execution time as well as RMSE and MAPE values. In the single-node tests using MLlib (both linear and multiple linear regression), with core utilization varied from 2 to 16 cores, all datasets perform best with 12 cores, at an execution time of 532.328 seconds. For the LSTM method, however, varying the core allocation yields no significant difference, and the programs take longer to execute. In the two-node tests, the best performance is achieved with 8 cores, at an execution time of 924.711 seconds, while in the three-node tests the optimal configuration is 6 cores, at 881.495 seconds. In conclusion, distributed MLlib programs cannot run without HDFS, and the optimal core allocation depends on the number of nodes used and the size of the dataset.
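To make the experimental setup concrete, below is a minimal PySpark sketch (not the authors' code) of how an MLlib linear-regression job of this kind can be configured and run. The master URL, HDFS path, feature and label column names, and the 12-core cap are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch, assuming a Spark standalone cluster and a CSV dataset in HDFS.
# Paths, column names, and the core cap below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression
from pyspark.ml.evaluation import RegressionEvaluator

spark = (SparkSession.builder
         .appName("mllib-linear-regression")
         .master("spark://master:7077")      # cluster master URL (assumed)
         .config("spark.cores.max", "12")    # cap total cores, as varied in the study
         .getOrCreate())

# Read the dataset from HDFS; in this setup, distributed MLlib jobs read from HDFS.
df = spark.read.csv("hdfs://master:9000/data/housing-price.csv",
                    header=True, inferSchema=True)

# Assemble feature columns into a single vector column (column names hypothetical).
assembler = VectorAssembler(inputCols=["area", "bedrooms", "age"],
                            outputCol="features")
data = assembler.transform(df).select("features", "price")

train, test = data.randomSplit([0.8, 0.2], seed=42)

# Fit a linear regression model and evaluate RMSE on the held-out split.
lr = LinearRegression(featuresCol="features", labelCol="price")
model = lr.fit(train)
pred = model.transform(test)
rmse = RegressionEvaluator(labelCol="price", predictionCol="prediction",
                           metricName="rmse").evaluate(pred)
print(f"RMSE: {rmse:.3f}")

spark.stop()
```

Here `spark.cores.max` bounds the total cores the application may use across the cluster, which is one way the per-run core allocations described in the abstract could be controlled; MAPE is not a built-in metric of Spark's RegressionEvaluator and would be computed separately from the predictions.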
