Abstract
Hadoop, the most widely adopted open-source implementation of the MapReduce framework, has made MapReduce broadly accessible. However, its performance is currently limited by the default MapReduce scheduler. To achieve better performance, the scheduler should take into account the computing power and system resources of nodes in heterogeneous environments. Furthermore, from the job perspective, the non-linear progress of tasks is also an important factor. Prior research has sought to enhance the performance of MapReduce, but existing approaches do not adequately consider the characteristics of both nodes and jobs. To overcome this drawback, we propose a Self-Learning MapReduce Scheduler (SLM), which outperforms existing schedulers in multi-job environments. Since competition for system resources can make a task's progress unpredictable, SLM determines the progress of each job from its own historical information. In particular, during a job's self-learning stage, SLM calculates task phase weights from the feedback of the job's first few completed tasks. With these phase weights, SLM obtains more accurate execution-time estimates, which are the key prerequisite for identifying stragglers (slow tasks). Experimental results show that SLM effectively improves the accuracy of execution-time estimation and straggler identification, leading to more rational resource utilization and shorter job execution times, especially in multi-job environments.
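The phase-weight mechanism the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the three-phase task model, and the straggler threshold are illustrative assumptions. The idea is that per-phase weights learned from a job's first completed tasks turn a running task's phase position into a progress fraction, from which total execution time can be extrapolated.

```python
# Illustrative sketch (assumed API, not SLM's actual code): phase-weighted
# execution-time estimation. Each task is modeled as a sequence of phases
# (e.g. read, map, sort); weights are learned from finished tasks of the
# same job, reflecting the self-learning stage described in the abstract.

def learn_phase_weights(completed_tasks):
    """Average each phase's share of total task time over finished tasks."""
    n_phases = len(completed_tasks[0])
    weights = [0.0] * n_phases
    for phase_durations in completed_tasks:
        total = sum(phase_durations)
        for i, d in enumerate(phase_durations):
            weights[i] += d / total
    return [w / len(completed_tasks) for w in weights]

def estimate_total_time(weights, phase_idx, phase_fraction, elapsed):
    """Extrapolate a running task's total time from its weighted progress."""
    progress = sum(weights[:phase_idx]) + weights[phase_idx] * phase_fraction
    return elapsed / progress if progress > 0 else float("inf")

def is_straggler(estimated, peer_estimates, threshold=1.5):
    """Flag a task whose estimate far exceeds the peer average (assumed rule)."""
    avg = sum(peer_estimates) / len(peer_estimates)
    return estimated > threshold * avg

# Example: two finished tasks with per-phase durations in seconds.
weights = learn_phase_weights([[4, 10, 2], [6, 12, 2]])
# A task halfway through phase 1 after 10 s:
est = estimate_total_time(weights, phase_idx=1, phase_fraction=0.5, elapsed=10)
```

A more accurate estimate than the uniform-progress assumption of Hadoop's default scheduler is what allows stragglers to be identified earlier and speculative copies to be launched on less loaded nodes.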