Abstract

Hadoop is an open-source implementation of the MapReduce framework for distributed processing. A Hadoop cluster is a specialized computational cluster designed for storing and analyzing large datasets across a cluster of workstations. To handle data at massive scale, Hadoop relies on the Hadoop Distributed File System (HDFS). Like most distributed file systems, HDFS faces a familiar problem of data sharing and availability among compute nodes, which often leads to reduced performance. This paper presents an experimental evaluation of Hadoop's computing performance on a rack-aware cluster that uses Hadoop's default block placement policy to improve data availability. Additionally, an adaptive data replication scheme that relies on access count prediction using Lagrange's interpolation is adapted to fit this scenario. Experiments conducted on the rack-aware cluster setup show that task completion time is significantly reduced, but as the volume of data being processed grows, computational speed falls considerably because of the update cost of maintaining replicas. Finally, the threshold at which the update cost and the replication factor balance is identified and presented graphically.
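The adaptive scheme in the abstract hinges on predicting a block's future access count from its recent history with Lagrange interpolation and then adjusting the block's replication factor accordingly. The sketch below illustrates that idea only in outline, assuming a per-epoch access history; the sample data, the accessesPerReplica knob, and the linear mapping from predicted accesses to a replication factor are illustrative assumptions, not the paper's implementation.

/** Minimal sketch: predict the next access count of a block from its
 *  recent access history using Lagrange interpolation, then derive a
 *  replication factor clamped to a configured range. */
public class AccessCountPredictor {

    /** Evaluate the Lagrange interpolating polynomial through the points
     *  (x[0], y[0]) .. (x[n-1], y[n-1]) at the query point xq. */
    static double lagrange(double[] x, double[] y, double xq) {
        double result = 0.0;
        for (int i = 0; i < x.length; i++) {
            double term = y[i];
            for (int j = 0; j < x.length; j++) {
                if (j != i) {
                    term *= (xq - x[j]) / (x[i] - x[j]);
                }
            }
            result += term;
        }
        return result;
    }

    /** Map a predicted access count to a replication factor, clamped to
     *  [minRep, maxRep]; accessesPerReplica is an assumed tuning knob. */
    static short replicationFor(double predictedAccesses,
                                int accessesPerReplica,
                                short minRep, short maxRep) {
        long rep = Math.round(predictedAccesses / accessesPerReplica);
        return (short) Math.max(minRep, Math.min(maxRep, rep));
    }

    public static void main(String[] args) {
        // Access counts observed over the last four epochs (hypothetical data).
        double[] epochs   = {1, 2, 3, 4};
        double[] accesses = {12, 18, 29, 47};

        double predicted = lagrange(epochs, accesses, 5);   // next epoch
        short rep = replicationFor(predicted, 25, (short) 3, (short) 8);

        System.out.printf("predicted accesses = %.1f, replication = %d%n",
                          predicted, rep);
    }
}

In an actual cluster the chosen factor would then be applied per file, for example with the HDFS shell command hdfs dfs -setrep or the Java FileSystem#setReplication call; the replicas added this way are precisely what drives the update cost that the paper weighs against data availability.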

Highlights

  • A distributed system is a pool of autonomous compute nodes [1] connected by fast networks that appears to users as a single workstation

  • Performance evaluation shows that as the replication level increases, task completion time is significantly reduced for computations involving no data files

  • With increasing replication factor, performance improves: at replication level 3 the completion time is 337 s, and it reduces considerably to 8.12 s at replication level 8


Introduction

A distributed system is a pool of autonomous compute nodes [1] connected by fast networks that appears to users as a single workstation. In practice, solving a complex problem involves dividing it into sub-tasks, each of which is solved by one or more compute nodes that communicate with each other by message passing. The current inclination towards Big Data analytics has led to such compute-intensive tasks. Big Data [2] is the term for collections of data sets so large and complex that they are difficult to process using traditional data processing tools. Big Data management is needed to ensure high levels of data accessibility for business intelligence and big data analytics. This requires applications capable of distributed processing of terabytes of information stored in a variety of file formats.
