Abstract

It is now ubiquitous for multiple jobs to coexist on the same machine, because tens or hundreds of cores can reside on the same chip. To run multiple jobs efficiently, schedulers should provide flexible scheduling logic. Moreover, corunning jobs may compete for shared resources, which can lead to performance degradation. While many scheduling algorithms have been proposed to support different scheduling logic schemes and to alleviate this contention, coscheduling jobs on the same machine without performance degradation remains a challenging problem. In this paper, we propose a novel adaptive deadlock-free scheduler, which provides flexible scheduling logic schemes and adopts an optimistic lock control mechanism to coordinate resource competition among corunning jobs. This scheduler exposes all underlying resource information to corunning jobs and gives them the necessary tools to use that information to compete for resources in a free-for-all manner. To further relieve the performance degradation caused by coscheduling, the scheduler automatically controls the number of active tools when frequent conflicts become the performance bottleneck. We justify our adaptive deadlock-free scheduling and present simulation results for synthetic and real-world workloads, in which we compare our proposed scheduler with two prevalent schedulers. The results indicate that our approach outperforms the compared schedulers in scheduling efficiency and scalability. Our results also show that the adaptive deadlock-free control yields significant improvements in the parallelism of node-level scheduling and in workload performance.
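To make the optimistic lock control idea concrete, the following is a minimal sketch of how user-level schedulers might claim cores from a fully visible shared state and back off when conflicts dominate. All names (SharedState, try_claim, conflict_threshold) are illustrative assumptions rather than the authors' implementation, and the paper's automated control over the number of active schedulers is simplified here to a per-scheduler conflict-rate check.

```python
import threading

class SharedState:
    """Illustrative shared view of all cluster cores (not the authors' API).

    Every user-level scheduler sees the full resource map and claims cores
    optimistically; a claim succeeds only if the cores are still free at
    commit time, mimicking an optimistic-lock / compare-and-swap protocol.
    """

    def __init__(self, num_cores):
        self._owner = [None] * num_cores      # None = free, else scheduler id
        self._lock = threading.Lock()         # guards only the atomic commit step

    def try_claim(self, scheduler_id, cores):
        """Atomically claim `cores` for `scheduler_id`; return True on success."""
        with self._lock:                      # stands in for a hardware CAS
            if any(self._owner[c] is not None for c in cores):
                return False                  # conflict: another scheduler won the race
            for c in cores:
                self._owner[c] = scheduler_id
            return True


class UserLevelScheduler:
    """One second-level scheduler competing for cores in a free-for-all manner."""

    def __init__(self, sid, state, conflict_threshold=0.5):
        self.sid = sid
        self.state = state
        self.attempts = 0
        self.conflicts = 0
        self.active = True
        self.conflict_threshold = conflict_threshold  # assumed tuning knob

    def schedule(self, wanted_cores):
        if not self.active:
            return False                      # throttled: temporarily deactivated
        self.attempts += 1
        if self.state.try_claim(self.sid, wanted_cores):
            return True
        self.conflicts += 1
        # Adaptive control: if conflicts dominate, deactivate this scheduler
        # so fewer schedulers race for the same resources.
        if self.conflicts / self.attempts > self.conflict_threshold:
            self.active = False
        return False


# Tiny usage example (illustrative only).
state = SharedState(num_cores=8)
schedulers = [UserLevelScheduler(sid, state) for sid in range(3)]
print(schedulers[0].schedule([0, 1]))   # True: cores 0-1 were free
print(schedulers[1].schedule([1, 2]))   # False: core 1 is already owned
```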

Highlights

  • In recent years, the rapid growth of sensor devices, high-speed search engines, and social networks has produced huge amounts of data

  • The adjustment provides a trade-off between adaptability and stability; (ii) we evaluate our scheduling via simulation using synthetic and real-world workloads and compare it with multiple-path monolithic and two-level schedulers; (iii) we show that our approach comprehensively outperforms these other common schedulers, and the simulation results show that the adaptive deadlock-free scheduling performs well in most experimental scenarios

  • Our results indicate that the deadlock-free approach can scale to tens of user-level schedulers and to challenging workloads, which is further discussed in Section 5.2


Summary

Introduction

The rapid growth of sensor devices, high-speed search engines, and social networks has produced huge amounts of data. Because the value of these applications is determined by the quantity of data and the speed at which results are produced, a number of Data-Intensive Scalable Computing (DISC) [2] systems have been developed. These DISC systems are enabling IT solutions for different fields because of their great potential for reducing operating expenses and management overheads; i.e., DISC systems provide a shared elastic computing infrastructure that accommodates multiple applications. Some researchers advocate that two-level schedulers are a good choice for supporting flexible scheduling logic schemes. This type of scheduler has a central coordinator (first-level scheduler) that decides how many underlying resources can be distributed or offered to multiple parallel user-level schedulers (i.e., second-level schedulers, each of which can implement distinct scheduling logic independently), as in Tessellation [16, 18] and Akaros [19].
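As a rough illustration of this two-level pattern, the sketch below has a first-level coordinator that statically offers core shares to second-level schedulers, each free to apply its own policy (FIFO here). The class and method names are invented for illustration and do not reflect the actual interfaces of Tessellation or Akaros; the point is only that capacity arbitration lives in the first level while scheduling policy lives in the second.

```python
class Coordinator:
    """First-level scheduler: decides how many cores each user-level scheduler gets."""

    def __init__(self, total_cores, user_schedulers):
        self.total_cores = total_cores
        self.user_schedulers = user_schedulers

    def distribute(self):
        # Simplest possible policy: an equal static partition of the cores.
        share = self.total_cores // len(self.user_schedulers)
        for us in self.user_schedulers:
            accepted = us.accept_offer(share)
            print(f"{us.name} accepted {accepted} of {share} offered cores")


class FifoScheduler:
    """Second-level scheduler: free to implement any scheduling logic on its share."""

    def __init__(self, name, pending_jobs):
        self.name = name
        self.pending_jobs = pending_jobs        # list of (job_id, cores_needed)

    def accept_offer(self, cores):
        used = 0
        for job_id, need in self.pending_jobs:  # admit jobs in FIFO order
            if used + need > cores:
                break
            used += need                        # "run" the job on its cores
        return used


coordinator = Coordinator(
    total_cores=16,
    user_schedulers=[FifoScheduler("batch", [(1, 4), (2, 4)]),
                     FifoScheduler("service", [(3, 2), (4, 8)])],
)
coordinator.distribute()
```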

Related Work
Adaptive Deadlock-Free Scheduling
Experiment Setups and Parameter Assumptions
Experiment Result Comparison and Evaluation
Findings
Conclusion and Future Work