Abstract

This paper studies the delay-optimal virtual machine (VM) scheduling problem in cloud computing systems that have a fixed pool of infrastructure resources, such as CPU, memory, and storage. The cloud computing system provides VMs as services to users. Cloud users request various types of VMs randomly over time, and the requested VM-hosting durations vary widely. We first adopt a queuing model for the heterogeneous and dynamic workloads. We then formulate VM scheduling in such a queuing cloud computing system as a decision-making process, where the decision variable is the vector of VM configurations and the optimization objective is the delay performance in terms of average job completion time. A low-complexity online scheme that combines shortest-job-first (SJF) buffering with min–min best fit (MMBF) scheduling, i.e., SJF-MMBF, is proposed to determine the solutions. A second scheme that combines SJF buffering with reinforcement learning (RL)-based scheduling, i.e., SJF-RL, is further proposed to avoid the job starvation that can arise under SJF-MMBF. Simulation results show that SJF-RL achieves delay-optimal VM scheduling, providing low delay across a range of job arrival rates and job length distributions. The results also show that although SJF-MMBF is suboptimal in delay under heavy loads and highly dynamic workloads, it delivers strong throughput in terms of the average job hosting rate.
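The SJF-MMBF combination described above can be illustrated with a minimal sketch: buffered jobs are served shortest first, and each job is placed on the feasible server leaving the least residual capacity (best fit). The single-resource job model, function name, and data layout here are illustrative assumptions, not the paper's implementation.

```python
def sjf_mmbf_schedule(jobs, server_capacities):
    """Illustrative SJF buffering + min-min best-fit scheduling.

    jobs: list of (job_id, job_length, resource_demand) tuples
    server_capacities: list of currently free capacity per server
    Returns a list of (job_id, server_index) assignments; jobs that
    fit on no server remain buffered (i.e., are not returned).
    """
    # SJF buffering: order the buffer by job length, shortest first.
    buffer = sorted(jobs, key=lambda job: job[1])
    free = list(server_capacities)
    assignments = []
    for job_id, _length, demand in buffer:
        # Best fit: among feasible servers, pick the one whose
        # leftover capacity after placement would be smallest.
        best_server, best_leftover = None, None
        for i, capacity in enumerate(free):
            if capacity >= demand:
                leftover = capacity - demand
                if best_leftover is None or leftover < best_leftover:
                    best_server, best_leftover = i, leftover
        if best_server is not None:
            free[best_server] -= demand
            assignments.append((job_id, best_server))
    return assignments
```

Note that a long job with a large demand can be bypassed indefinitely by a stream of shorter arrivals under this greedy rule, which is exactly the starvation risk that motivates the paper's SJF-RL alternative.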
