Abstract
With the rapid growth in the performance of computer systems, their energy consumption has increased dramatically as well. Energy efficiency has therefore received considerable attention, especially in large-scale distributed computing systems. We propose an effective approach that reduces energy consumption by performing dynamic speed scaling and task scheduling simultaneously. We propose Markov models for distributed computing systems and provide detailed analyses of these models, formulate the problem as a Markov decision process (MDP), and introduce MDP algorithms for obtaining the optimal solutions. The efficacy of our approach is further validated by simulation experiments.
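As a rough illustration of the kind of MDP formulation the abstract refers to, the following is a minimal value-iteration sketch for joint speed scaling, under assumptions that are entirely illustrative and not taken from the paper: states are queue lengths, actions are CPU speeds, the per-step cost combines a cubic power model with a linear holding cost, and arrivals and services are Bernoulli.

```python
# Toy value-iteration sketch for an MDP that jointly trades off
# processing speed (energy) against queueing delay.
# All model parameters below are illustrative assumptions,
# not taken from the paper.

N = 10                     # maximum queue length (state space: 0..N)
speeds = [0.0, 0.5, 1.0]   # available CPU speeds (actions)
p_arrival = 0.4            # probability a task arrives each step
gamma = 0.95               # discount factor
hold_cost = 1.0            # delay penalty per queued task per step

def step_cost(state, speed):
    # Cubic dynamic-power model plus a holding cost for queued tasks.
    return speed ** 3 + hold_cost * state

def transitions(state, speed):
    """Yield (prob, next_state) pairs for a Bernoulli arrival/service model."""
    p_serve = min(speed, 1.0) if state > 0 else 0.0
    for arr, p_a in ((1, p_arrival), (0, 1 - p_arrival)):
        for srv, p_s in ((1, p_serve), (0, 1 - p_serve)):
            nxt = min(max(state + arr - srv, 0), N)
            yield p_a * p_s, nxt

def value_iteration(tol=1e-6):
    # Standard value iteration: repeatedly apply the Bellman
    # optimality operator until the value function converges.
    V = [0.0] * (N + 1)
    while True:
        V_new = [min(step_cost(s, a)
                     + gamma * sum(p * V[n] for p, n in transitions(s, a))
                     for a in speeds)
                 for s in range(N + 1)]
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            return V_new
        V = V_new

V = value_iteration()
# Greedy policy with respect to the converged value function:
# the chosen speed for each queue length.
policy = [min(speeds,
              key=lambda a: step_cost(s, a)
              + gamma * sum(p * V[n] for p, n in transitions(s, a)))
          for s in range(N + 1)]
```

In this toy model the optimal policy idles (speed 0) when the queue is empty, since spending power then yields no service, and scales up to higher speeds as the queue grows; the paper's actual models and algorithms are of course richer than this sketch.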