Abstract

In this study, we investigate a real-time system in which computationally intensive tasks are executed on cloud computing platforms in data centers. These data centers are expected to respond promptly to incoming demands, so an efficient scheduler is essential for deciding which processor each job is assigned to and when each job begins execution. We propose a novel reinforcement learning algorithm that introduces a new state variable and uses a set of virtual bins to classify jobs according to their remaining processing times. Our objective is to minimize the total slowdown, defined as the sum over all jobs of the ratio of each job's completion time to its demand size. We evaluate the algorithm against widely used and highly effective dispatching rules from the existing literature. The computational results show that the proposed reinforcement learning approach outperforms these alternative solution approaches. Furthermore, we demonstrate the generalization capability of the algorithm: it continues to outperform the dispatching rules when applied to previously unseen test instances.
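To make the objective and the binning idea concrete, the following minimal Python sketch computes the total-slowdown metric and a bin-count feature of the kind a remaining-time state variable could use. The function names, bin boundaries, and job values are illustrative assumptions only, not the paper's actual state representation or implementation.

```python
import numpy as np

def total_slowdown(completion_times, demand_sizes):
    """Total slowdown: sum over jobs of completion time divided by demand size."""
    return float(np.sum(np.asarray(completion_times) / np.asarray(demand_sizes)))

def bin_counts(remaining_times, bin_edges):
    """Count waiting jobs per virtual bin, grouped by remaining processing time.

    `bin_edges` are illustrative boundaries; the paper's actual bin definition
    and the remainder of its state variable are not reproduced here.
    """
    return np.histogram(remaining_times, bins=bin_edges)[0]

# Example: three jobs with completion times 10, 4, 9 and demand sizes 5, 4, 3
# give a total slowdown of 10/5 + 4/4 + 9/3 = 6.0.
print(total_slowdown([10, 4, 9], [5, 4, 3]))                      # 6.0
print(bin_counts([1.5, 3.0, 7.0, 12.0], [0, 2, 5, 10, np.inf]))   # [1 1 1 1]
```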
