Abstract

MapReduce is a parallel computing paradigm for big-data processing in clusters and data centers. A MapReduce workload consists of a set of jobs, each comprising multiple map tasks followed by multiple reduce tasks. Because 1) map tasks can run only on map slots and reduce tasks only on reduce slots, and 2) a job's map tasks must execute before its reduce tasks, different job execution orders and map/reduce slot configurations for the same MapReduce workload yield significantly different performance and system utilization. Two performance metrics are considered: makespan and total completion time. Two algorithms are proposed: the UAS job ordering algorithm, which optimizes the job execution order (including the case where two jobs have the same size), and a slot configuration algorithm, which optimizes the division of slots between map and reduce tasks. The proposed algorithms outperform unoptimized Hadoop, improving performance by roughly 15∼70%.
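The abstract does not reproduce the UAS algorithm itself, but the underlying problem, ordering jobs whose map stage must finish before their reduce stage begins, is closely related to two-stage flow-shop scheduling. A minimal illustrative sketch, assuming a fully serialized pipeline with one map slot and one reduce slot and using the classic Johnson's-rule ordering (not necessarily the paper's method), shows how job order alone changes makespan; the job sizes below are invented for illustration:

```python
def johnson_order(jobs):
    """Order (map_time, reduce_time) jobs to reduce two-stage makespan.

    Johnson's rule: jobs whose map time is at most their reduce time run
    first, in increasing map time; the remaining jobs run last, in
    decreasing reduce time.
    """
    first = sorted((j for j in jobs if j[0] <= j[1]), key=lambda j: j[0])
    last = sorted((j for j in jobs if j[0] > j[1]), key=lambda j: -j[1])
    return first + last


def makespan(jobs):
    """Makespan with one map slot and one reduce slot.

    Map tasks run back to back; each reduce task starts only after both
    its own map task and the previous reduce task have finished.
    """
    map_done = reduce_done = 0
    for m, r in jobs:
        map_done += m
        reduce_done = max(reduce_done, map_done) + r
    return reduce_done


# Hypothetical workload: (map_time, reduce_time) per job.
jobs = [(4, 3), (1, 5), (5, 2), (2, 4)]
ordered = johnson_order(jobs)  # [(1, 5), (2, 4), (4, 3), (5, 2)]
# Reordering shrinks the makespan from 18 to 15 on this example.
```

The example mirrors the abstract's point: with identical resources and identical jobs, the execution order alone changes the makespan, which is what a job ordering optimizer exploits.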
