Abstract

Task scheduling requires system designers to have complete information about the resources and their capabilities, as well as the tasks and their application-specific requirements. An effective task-to-resource mapping strategy maximizes resource utilization under constraints while minimizing task waiting time, which in turn maximizes task execution efficiency. In this work, a two-level reinforcement learning algorithm for task scheduling is proposed. The algorithm uses a deep reinforcement learning stage to generate a deployable strategy for task-to-resource mapping. This mapping is re-evaluated at specific execution breakpoints, and the strategy is revised based on incremental learning from these breakpoints. To perform incremental learning, real-time parametric checks are run on the resources and the tasks, and a new strategy is devised during execution. The mean task waiting time is reduced by 20% compared with standard algorithms such as Dynamic and Integrated Resource Scheduling, Improved Differential Evolution, and Q-learning-based Improved Differential Evolution, while resource utilization is improved by more than 15%. The algorithm is evaluated on datasets from different domains, including public-domain Coronavirus disease (COVID-19) datasets and National Aeronautics and Space Administration (NASA) datasets, and performs consistently on all of them.
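The two-level structure described above can be illustrated with a minimal sketch. This is not the paper's implementation: tabular Q-learning stands in for the deep learning stage, the waiting-time model is a toy, and all function names (`train_mapping`, `schedule`, `breakpoint_update`) and parameters are illustrative assumptions. It shows only the shape of the idea: level one learns a task-to-resource mapping offline; level two nudges that mapping at execution breakpoints using observed waiting times.

```python
import random

def simulate_wait(task_load, resource_speed, queue_len):
    """Toy model: waiting time grows with queue length and task size."""
    return queue_len + task_load / resource_speed

def train_mapping(tasks, speeds, episodes=2000, alpha=0.1, eps=0.2, seed=0):
    """Level 1 (stand-in for the deep stage): learn Q[task][resource]
    by epsilon-greedy trial assignments, rewarding short waits."""
    rng = random.Random(seed)
    n_res = len(speeds)
    q = [[0.0] * n_res for _ in tasks]
    for _ in range(episodes):
        queues = [0] * n_res                     # per-resource queue length
        for t, load in enumerate(tasks):
            if rng.random() < eps:
                r = rng.randrange(n_res)         # explore
            else:
                r = max(range(n_res), key=lambda j: q[t][j])  # exploit
            reward = -simulate_wait(load, speeds[r], queues[r])
            q[t][r] += alpha * (reward - q[t][r])
            queues[r] += 1
    return q

def schedule(q):
    """Deployable strategy: greedily pick the best resource per task."""
    return [max(range(len(row)), key=lambda j: row[j]) for row in q]

def breakpoint_update(q, t, r, observed_wait, alpha=0.1):
    """Level 2: at an execution breakpoint, fold a real-time wait
    observation for (task t, resource r) into the learned estimate."""
    q[t][r] += alpha * (-observed_wait - q[t][r])
```

In this sketch, a long observed wait at a breakpoint lowers the Q-value of the offending (task, resource) pair, so the next call to `schedule` can steer that task elsewhere, mirroring the re-evaluation loop the abstract describes.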
