Abstract

Minimizing the energy consumption of embedded systems with real-time execution constraints is becoming increasingly important. More functionality and better performance/cost trade-offs are expected from such systems because of the increased use of real-time applications and the fact that batteries have become standard power supplies. Dynamically changing the speed of the processor is a common and effective way to reduce energy consumption, and remarkable gains can be obtained for cache-intensive and/or CPU-bound applications, where the CPU energy consumption may dominate the overall energy consumption. Indeed, this is the reason why modern processors are equipped with Dynamic Voltage and Frequency Scaling (DVFS) technology [7]. In the deterministic case, where job sizes and arrival times are known, a vast body of literature has addressed the design of both off-line and on-line algorithms that compute speed profiles minimizing energy consumption subject to hard real-time constraints (deadlines) on job execution times; see, e.g., [5]. In a stochastic environment, where only statistical information is available about job sizes and arrival times, combining hard deadlines with energy minimization via DVFS-based techniques turns out to be much more difficult: enforcing hard deadlines requires being very conservative, i.e., planning for worst cases. As a matter of fact, existing approaches work only with a finite number of jobs [6, 3].
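
To make the deterministic setting concrete, the sketch below illustrates one well-known offline approach to computing an energy-minimal speed profile, the critical-interval (YDS) algorithm of Yao, Demers and Shenker. It is shown only as an illustration of the problem the abstract describes, not as the method of this paper; the job data, function name, and the assumption of a convex power model P(s) ∝ s^α (under which running each job as slowly as its deadline allows minimizes energy) are illustrative.

```python
def yds_speed_profile(jobs):
    """Offline min-energy speed assignment under a convex power model
    (critical-interval / YDS idea). jobs: list of (release, deadline, work).
    Returns {job index: speed}; within each critical interval the selected
    jobs are run at that interval's intensity (e.g., under EDF)."""
    remaining = {i: j for i, j in enumerate(jobs)}
    speeds = {}
    while remaining:
        # Candidate interval endpoints are the remaining releases/deadlines.
        times = sorted({t for r, d, _ in remaining.values() for t in (r, d)})
        best, best_set = None, None
        for a in times:
            for b in times:
                if b <= a:
                    continue
                # Jobs whose whole window [r, d] fits inside [a, b].
                inside = [i for i, (r, d, _) in remaining.items()
                          if a <= r and d <= b]
                if not inside:
                    continue
                intensity = sum(remaining[i][2] for i in inside) / (b - a)
                if best is None or intensity > best[0]:
                    best, best_set = (intensity, a, b), inside
        g, a, b = best
        for i in best_set:          # critical-interval jobs run at speed g
            speeds[i] = g
            del remaining[i]
        # Compress the critical interval out of the remaining jobs' windows.
        length = b - a
        def shift(t):
            return t if t <= a else (t - length if t >= b else a)
        remaining = {i: (shift(r), shift(d), w)
                     for i, (r, d, w) in remaining.items()}
    return speeds


# Tiny example (hypothetical jobs): job 0 runs at speed 2 in [0, 2],
# job 1 at speed 1 afterwards.
print(yds_speed_profile([(0, 2, 4), (0, 4, 2)]))  # {0: 2.0, 1: 1.0}
```

The stochastic case discussed above is harder precisely because such a profile cannot be computed in advance: job sizes and arrival times are only known statistically, yet the deadlines must still be met with certainty.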
