Abstract

Dynamic Voltage and Frequency Scaling (DVFS) is widely used in today's mobile devices. Commonly adopted OS-level DVFS policies usually operate at high sampling and adjustment frequencies in order to compensate for the lack of information from applications. In this paper, we consider the application of video decoding and propose a method that enables DVFS at a coarse time granularity by taking advantage of application-level semantics. Using machine learning, we build a prediction model for the choice of CPU frequency based on the estimated workload and the progress of execution. We show that with our method, the system is able to perform DVFS at a frequency of only around 1 Hz, abiding by the practical constraints imposed by existing operating systems, while still saving an average of 40.1% in energy consumption compared with execution at nominal speed. Further, we argue that the law of diminishing returns applies as the frequency at which DVFS operates increases, and thus it may be costly and impractical to perform application-level DVFS at a very fine time granularity.
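The core idea sketched in the abstract can be illustrated with a minimal example: once per adjustment interval (~1 Hz), pick the lowest CPU frequency whose capacity covers the workload estimated for that interval. All names, frequency levels, and the workload-estimation interface below are illustrative assumptions, not details taken from the paper; in particular, the paper uses a learned prediction model where this sketch takes a pre-computed cycle estimate as input.

```python
# Hypothetical sketch of coarse-grained DVFS frequency selection for
# video decoding. Frequency levels and the cycle-estimate interface
# are assumptions for illustration only.

AVAILABLE_FREQS_MHZ = [300, 600, 900, 1200, 1500]  # assumed CPU levels


def pick_frequency(estimated_cycles, interval_s=1.0):
    """Return the lowest frequency (MHz) that can complete the
    estimated decoding workload within one adjustment interval
    (interval_s ~= 1 s corresponds to DVFS at roughly 1 Hz)."""
    for f_mhz in AVAILABLE_FREQS_MHZ:
        # Capacity of this level over the interval, in cycles.
        if f_mhz * 1e6 * interval_s >= estimated_cycles:
            return f_mhz
    # Workload exceeds every level's capacity: run at maximum speed.
    return AVAILABLE_FREQS_MHZ[-1]
```

Because the decision is made only once per second, such a policy stays within the adjustment-rate limits that existing operating systems impose, at the cost of relying on a good per-interval workload estimate.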
