Abstract

We study a scheduling model with speed scaling for machines and an immediate start requirement for jobs. Speed scaling improves the system performance but incurs an energy cost. The immediate start condition implies that each job must be started exactly at its release time; such a condition is typical for modern Cloud computing systems with abundant resources. We consider two cost functions: one represents the quality of service, and the other corresponds to the cost of running. We demonstrate that the basic problem of minimizing the aggregated cost function for n jobs is solvable in O(n log n) time in the single-machine case and in O(n^2 m) time in the case of m parallel machines. We also address additional features, e.g., the cost of job rejection or the cost of initiating a machine. In the case of a single machine, we present algorithms for minimizing one of the cost functions subject to an upper bound on the value of the other, as well as for finding a Pareto-optimal solution.
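To make the setting concrete, the sketch below evaluates one single-machine schedule under the immediate-start rule. The abstract does not spell out the cost functions, so the example assumes the standard speed-scaling power model (running at speed s draws power s^alpha, hence a job of volume p run at speed s takes p/s time and uses p*s^(alpha-1) energy) and uses total processing time as a placeholder for the quality-of-service cost; the function and parameter names are illustrative only and are not taken from the paper.

```python
# Illustrative sketch only: assumes the s**alpha power model and a
# total-processing-time QoS cost, which are not specified in the abstract.

ALPHA = 3.0  # assumed power exponent, typical in the speed-scaling literature


def aggregated_cost(jobs, speeds, alpha=ALPHA):
    """Evaluate one single-machine schedule under the immediate-start rule.

    jobs   -- list of (release_time, volume) pairs, sorted by release time
    speeds -- chosen speed for each job, in the same order as jobs

    Each job must start exactly at its release time, so consecutive jobs may
    not overlap: job i has to finish no later than job i+1 is released.
    Returns the assumed aggregated cost (QoS part + energy part), or raises
    ValueError if the speed choice violates the immediate-start condition.
    """
    qos_cost = 0.0
    energy_cost = 0.0
    for i, ((r, p), s) in enumerate(zip(jobs, speeds)):
        duration = p / s
        completion = r + duration
        if i + 1 < len(jobs) and completion > jobs[i + 1][0]:
            raise ValueError(f"job {i} would still run at the next release time")
        qos_cost += duration                  # assumed QoS measure
        energy_cost += p * s ** (alpha - 1)   # energy under the s**alpha model
    return qos_cost + energy_cost


# Example: three jobs released at times 0, 2, 5 with volumes 4, 3, 2,
# run fast enough that each finishes before the next release time.
jobs = [(0.0, 4.0), (2.0, 3.0), (5.0, 2.0)]
speeds = [2.0, 1.5, 1.0]
print(aggregated_cost(jobs, speeds))
```

Even in this toy setting the trade-off studied in the paper is visible: raising a job's speed shrinks the QoS term but inflates the energy term, subject to the constraint that every job still completes before the next release time.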

Highlights

  • We study scheduling models that address two important aspects of modern computing systems: machine speed scaling for time and energy optimization and the requirement to start jobs immediately at the time they are submitted to the system

  • The first aspect, speed scaling, has been the subject of intensive research since the 1990s, see Yao et al. (1995), and has gained renewed importance with the increased attention to energy saving, see the surveys by Albers (2009, 2010a), Jing et al. (2013), and Gerards et al. (2016). It reflects the ability of modern computing systems to change their clock speeds through the technique known as Dynamic Voltage and Frequency Scaling (DVFS)

  • DVFS techniques have been successfully applied in Cloud data centers to reduce energy usage, see, e.g., von Laszewski et al. (2009), Wu et al. (2014), and do Lago et al. (2011)

Introduction

We study scheduling models that address two important aspects of modern computing systems: machine speed scaling for time and energy optimization, and the requirement to start jobs immediately at the time they are submitted to the system. The second aspect, the immediate start condition, is motivated by the advancements of modern Cloud computing systems, and it is widely accepted by practitioners. This feature is not typical for traditional scheduling research dealing with scenarios arising from manufacturing. The immediate start requirement may seem a strong assumption, but it is a fact of today's life: it is widely accepted in distributed computing, yet generally overlooked by the scheduling community, where the traditional perception of limited resources and acceptable delayed start times persists. In the remainder of this section, we provide a formal definition of the model under study and discuss the relevant literature.

Definitions and notation
Related work
Problem 1 on a single machine
Problem 2 on a single machine
Conclusions