Hadoop is an open-source implementation of the MapReduce framework for distributed processing. A Hadoop cluster is a specialized computational cluster designed to store and analyze large datasets across a cluster of workstations. To handle data at massive scale, Hadoop relies on the Hadoop Distributed File System (HDFS). Like most distributed file systems, HDFS faces a familiar problem of data sharing and availability among compute nodes, which often leads to degraded performance. This paper presents an experimental evaluation of Hadoop's computing performance on a rack-aware cluster that uses Hadoop's default block placement policy to improve data availability. In addition, an adaptive data replication scheme that predicts block access counts using Lagrange interpolation is adapted to this setting. To validate the approach, experiments were conducted on the rack-aware cluster setup; the setup significantly reduced task completion time, but as the volume of data being processed grows, computational speed drops considerably due to the replica update cost. Finally, the threshold at which update cost and replication factor balance is identified and presented graphically.
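As a rough illustration of the access-count prediction idea (the abstract does not give the exact formulation, so the symbols below are assumptions): if a block's access counts $c_1, c_2, \dots, c_n$ have been observed at times $t_1, t_2, \dots, t_n$, the Lagrange interpolating polynomial estimates the access count at a future time $t$ as

\[
\hat{c}(t) \;=\; \sum_{i=1}^{n} c_i \prod_{\substack{j=1 \\ j \neq i}}^{n} \frac{t - t_j}{t_i - t_j},
\]

and the replication factor of the block can then be raised or lowered when $\hat{c}(t)$ crosses a chosen threshold.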