Abstract

Current Dynamic Voltage and Frequency Scaling (DVFS) techniques involve algorithms that predict when a processor is in a period of accessing off-chip memory and dial down its voltage and frequency during that phase, reducing energy consumption with minimal, if any, effect on execution time. These algorithms often operate with a parameter that defines the tolerable performance degradation, because a processor can be set to only a limited number of operating frequencies. This limit makes it practically impossible to dial a processor down to the exact frequency that maximizes energy efficiency without affecting performance. The algorithms therefore need this parameter to avoid choices that degrade performance to an unacceptable level or that provide no benefit in energy consumption. However, the overhead costs incurred by the process of voltage and frequency scaling must also be taken into consideration. We propose a study to determine the impact of these overhead costs on the overall benefit of dynamic voltage and frequency scaling.
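
To make the trade-off concrete, the following sketch (our illustration, not an algorithm from the paper or the literature) selects the lowest available frequency whose predicted slowdown stays within the tolerable-degradation parameter and whose predicted energy saving still exceeds the cost of the voltage/frequency transition. The frequency list, the overhead constants, and the simple timing and energy models are all assumptions chosen for illustration.

    # Illustrative sketch of DVFS frequency selection under a tolerable
    # performance degradation bound, accounting for switching overhead.
    # All constants and models below are assumed, not taken from the paper.

    FREQS_GHZ = [0.8, 1.2, 1.6, 2.0, 2.4]   # assumed discrete operating points
    F_MAX = max(FREQS_GHZ)

    SWITCH_OVERHEAD_S = 50e-6   # assumed time cost of one V/f transition
    SWITCH_OVERHEAD_E = 1e-4    # assumed energy cost, in the normalized
                                # units produced by predicted_energy below

    def predicted_time(f_ghz, t_compute_s, t_memory_s):
        """Simple model: compute time scales inversely with frequency,
        while off-chip memory time is frequency-independent."""
        return t_compute_s * (F_MAX / f_ghz) + t_memory_s

    def predicted_energy(f_ghz, t_compute_s, t_memory_s):
        """Crude dynamic-energy model: power scales roughly with f^3
        (voltage tracks frequency), integrated over the phase's runtime."""
        power = (f_ghz / F_MAX) ** 3   # dynamic power, normalized to f_max
        return power * predicted_time(f_ghz, t_compute_s, t_memory_s)

    def pick_frequency(t_compute_s, t_memory_s, max_degradation=0.05):
        """Return the lowest frequency whose predicted slowdown stays within
        max_degradation and whose energy saving outweighs the transition
        overhead; otherwise stay at F_MAX (i.e., do not switch)."""
        baseline_t = predicted_time(F_MAX, t_compute_s, t_memory_s)
        baseline_e = predicted_energy(F_MAX, t_compute_s, t_memory_s)
        for f in sorted(FREQS_GHZ):   # try the lowest frequency first
            slowdown = (predicted_time(f, t_compute_s, t_memory_s)
                        + SWITCH_OVERHEAD_S) / baseline_t
            saving = baseline_e - (predicted_energy(f, t_compute_s, t_memory_s)
                                   + SWITCH_OVERHEAD_E)
            if slowdown <= 1.0 + max_degradation and saving > 0.0:
                return f
        return F_MAX   # no lower point is both safe and worthwhile

    if __name__ == "__main__":
        # Memory-bound phase (0.2 ms compute, 9.8 ms memory stalls):
        print(pick_frequency(t_compute_s=2e-4, t_memory_s=9.8e-3))   # -> 0.8
        # Compute-bound phase (9 ms compute, 1 ms memory stalls):
        print(pick_frequency(t_compute_s=9e-3, t_memory_s=1e-3))     # -> 2.4

In this sketch, the memory-bound phase can be slowed to the lowest operating point within a 5% degradation bound, while the compute-bound phase offers no frequency that is both safe and energy-saving once the transition overhead is charged, so the processor stays at full speed.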
