Abstract

In this paper, we present a novel scheduling solution for a class of System-on-Chip (SoC) systems where heterogeneous chip resources (DSP, FPGA, GPU, etc.) must be efficiently scheduled for continuously arriving hierarchical jobs whose tasks are represented by a directed acyclic graph. Traditionally, heuristic algorithms have been widely used across many resource scheduling domains, and Heterogeneous Earliest Finish Time (HEFT) has been a dominant state-of-the-art technique in a broad range of heterogeneous resource scheduling domains for many years. Despite their long-standing popularity, HEFT-like algorithms are known to be vulnerable to even small amounts of noise added to the environment. Our Deep Reinforcement Learning (DRL)-based SoC Scheduler (DeepSoCS), capable of learning the “best” task ordering under dynamic environment changes, overcomes the brittleness of rule-based schedulers such as HEFT while delivering significantly higher performance across different types of jobs. We describe the DeepSoCS design process using a real-time heterogeneous SoC scheduling emulator, discuss major challenges, and present two novel neural network design features that lead to outperforming HEFT: (i) hierarchical job- and task-graph embedding; and (ii) efficient use of real-time task information in the state space. Furthermore, we introduce effective techniques to address two fundamental challenges present in our environment: delayed consequences and joint actions. Through an extensive simulation study, we show that DeepSoCS achieves significantly shorter job execution times than HEFT, with greater robustness under realistic noise conditions. We conclude with a discussion of potential improvements to our DeepSoCS neural scheduler.

Highlights

  • Task scheduling is a universal problem that affects many aspects of our lives, including wireless communication systems, supply chain logistics, device placement, computer processors, supercomputing, and cloud computing, to name a few

  • We have investigated a Deep Reinforcement Learning (DRL)-based approach that is capable of learning to schedule a multitude of jobs without significant design changes, while simultaneously addressing the inherent brittleness of rule-based schedulers and delivering higher system-wide performance

  • We present a novel neural network algorithm, DeepSoCS, that learns to make highly resource-efficient task-ordering decisions in a high-fidelity environment


Summary

Introduction

Task scheduling is a universal problem that affects many aspects of our lives, including wireless communication systems, supply chain logistics, device placement, computer processors, supercomputing, and cloud computing, to name a few. Any algorithm achieving more resource-efficient task/job execution without incurring an additional system penalty can bring huge benefits, lower costs, or both, to many industries. Heuristic-based list scheduling algorithms are widely used in a multitude of heterogeneous task and resource scheduling problems, where they heuristically estimate the relative importance of the presented task nodes and schedule tasks on a rank basis. Heterogeneous Earliest Finish Time (HEFT) is a general list scheduling algorithm. HEFT and its derivative, Predict Earliest Finish Time (PEFT) [3], are the primary benchmarks to compare against. To this date, both algorithms generate competitive scheduling decisions in the context of minimizing total application execution time [4].
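To make the rank-based list scheduling idea concrete, the sketch below computes HEFT's "upward rank" for each task of a small DAG and derives the scheduling order. The task names, computation costs, and communication costs are invented for illustration; a full HEFT implementation would also assign each ranked task to the processor giving its earliest finish time, which is omitted here.

```python
# Illustrative sketch of HEFT's ranking phase.
# The DAG, computation costs, and communication costs are hypothetical.
from functools import lru_cache

# Successors of each task, with average communication cost per edge.
succ = {"A": {"B": 2, "C": 3}, "B": {"D": 1}, "C": {"D": 4}, "D": {}}
# Average computation cost of each task across the heterogeneous resources.
avg_cost = {"A": 5, "B": 6, "C": 4, "D": 3}

@lru_cache(maxsize=None)
def upward_rank(task):
    # rank_u(t) = w(t) + max over successors s of (c(t, s) + rank_u(s));
    # exit tasks simply take their own average computation cost.
    children = succ[task]
    if not children:
        return avg_cost[task]
    return avg_cost[task] + max(c + upward_rank(s) for s, c in children.items())

# HEFT schedules tasks in decreasing upward rank; each task would then
# be placed on the processor giving the earliest finish time (not shown).
order = sorted(succ, key=upward_rank, reverse=True)
print(order)  # → ['A', 'C', 'B', 'D']
```

Because the upward rank of a task depends only on its successors, memoizing the recursion (here via `lru_cache`) computes all ranks in a single pass over the DAG.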
