Abstract

Crown scheduling is a static scheduling approach for sets of parallelizable tasks with a common deadline, aiming to minimize energy consumption on parallel processors with frequency scaling. We demonstrate that crown schedules are robust, i.e. that the runtime prolongation of one task by a moderate percentage does not cause a deadline transgression by the same fraction. In addition, by speeding up some tasks scheduled after the prolonged task, the deadline can still be met at a moderate additional energy consumption. We present a heuristic to perform this re-scaling online and explore the tradeoff between additional energy consumption in normal execution and limitation of deadline transgression in delay cases. We evaluate our approach with scheduling experiments on synthetic and application task sets. Finally, we consider the influence of heterogeneous platforms such as ARM’s big.LITTLE on robustness.

Highlights

  • Static scheduling of parallelizable task sets on parallel machines has been investigated for decades, and the advent of frequency scaling has led to scheduling approaches that e.g. try to minimize energy consumption for a given throughput, i.e. a deadline by which each task must be executed

  • We have investigated the robustness and elasticity of crown schedules, i.e. static schedules for parallelizable tasks on parallel machines with frequency scaling, given a deadline and minimizing energy consumption

  • We demonstrated with synthetic benchmark tasksets that a runtime increase of one task by a fraction α leads to a makespan exceeding the deadline by a fraction 0.319α on average and 0.779α at maximum


Introduction

Static scheduling of parallelizable task sets on parallel machines has been investigated for decades, and the advent of frequency scaling has led to scheduling approaches that e.g. try to minimize energy consumption for a given throughput, i.e. a deadline by which each task must be executed. Static schedulers assume that the workload of each task is known exactly, but small variations can occur. As an example, consider a task set with two tasks of similar workload scheduled onto two cores with a common deadline M. The tasks could remain sequential and be executed one on each core until the deadline (at a suitable frequency), or the tasks could be parallelized; for simplicity of presentation, we assume perfect speedup here. While the energy consumption is the same for both schedules, the robustness is different, cf. Fig. 1: in the first schedule, prolonging one task's runtime by a fraction α causes the deadline to be surpassed by time αM.

Our power consumption model takes only the frequency as a parameter, not voltage, temperature, or instruction mix (on which power also depends). We assume that for each frequency the least possible voltage level is used, that temperature is controlled by cooling, and that the tasks' instruction mixes are sufficiently similar. The power consumption and the range of available frequencies of a core depend on the core's type.
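The two-task example can be sketched numerically. The following is a minimal illustration, not the paper's actual model: it assumes two tasks of equal workload W, a common deadline M, perfect speedup, and a cubic dynamic-power law P(f) = f³ (a common textbook assumption). It contrasts the sequential schedule, where a prolonged task overshoots the deadline by αM, with the parallelized schedule, where the second task's frequency can be raised so the deadline is still met at extra energy cost.

```python
# Illustrative sketch (assumed model, not the paper's): two tasks of
# equal workload W on two cores, common deadline M, power P(f) = f**3.

W = 1.0      # workload per task (abstract work units)
M = 1.0      # common deadline
alpha = 0.2  # one task's runtime grows by 20 %

# Schedule A (sequential): each task on its own core at f = W/M,
# finishing exactly at the deadline in the nominal case.
f_seq = W / M
overshoot_A = alpha * M          # prolonged task ends at (1+alpha)*M

# Schedule B (parallelized): both tasks run back to back, each using
# both cores for M/2 at frequency f_seq in the nominal case.
t1_end = (1 + alpha) * (M / 2)   # first task is prolonged
slack = M - t1_end               # time left for the second task
f_rescaled = (W / 2) / slack     # per-core frequency to still meet M

energy_nominal = 2 * f_seq ** 3 * (M / 2)    # second task at f_seq
energy_rescaled = 2 * f_rescaled ** 3 * slack  # after the speed-up

print(f"Schedule A overshoot: {overshoot_A:.4f}")
print(f"Schedule B rescaled frequency: {f_rescaled:.4f} (nominal {f_seq:.4f})")
print(f"Extra energy for schedule B: {energy_rescaled - energy_nominal:.4f}")
```

Under these assumptions, the parallelized schedule absorbs the delay entirely (no overshoot) by raising the second task's frequency from 1.0 to 1.25, at the price of roughly 56 % more energy for that task — the robustness/energy tradeoff the re-scaling heuristic navigates.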

