Abstract

Data centers incur enormous energy costs. Saving energy while providing an acceptable quality of service (QoS) is the goal data centers pursue, but it is a challenging one. To ensure the QoS of latency-critical applications, data centers typically run processors at high frequencies, and this continuous high-frequency operation wastes a great deal of energy. Modern processors are equipped with dynamic voltage and frequency scaling (DVFS) technology, which allows a processor to run at any of the frequency levels it supports, so we focus on how to use DVFS to trade off energy against QoS. In this paper, we propose a two-stage DVFS-based strategy that dynamically scales the CPU frequency during the execution of latency-critical workloads, aiming to minimize the energy consumption of a latency-critical workload under its QoS constraint. The two-stage strategy consists of a static stage and a dynamic stage, which work together to determine the optimal frequency for running the workload. The static stage uses a carefully designed heuristic algorithm to determine frequency-load matches under the QoS constraint, while the dynamic stage uses a threshold method to decide whether to adjust the pre-set frequency. We evaluate the two-stage strategy in terms of QoS and energy saving on the CloudSuite benchmark and compare both metrics with the state-of-the-art Ondemand governor. Results show that our strategy is superior to Ondemand for energy saving, improving it by more than 13%.
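The static/dynamic split described above can be sketched in a few lines. This is an illustrative assumption, not the paper's actual implementation: the frequency levels, the QoS target, the inverse-frequency latency model in the static stage, and the threshold values in the dynamic stage are all hypothetical placeholders.

```python
# Hypothetical sketch of the two-stage DVFS strategy from the abstract.
# Frequency levels, QoS target, latency model, and thresholds are assumptions.

AVAILABLE_FREQS_GHZ = [1.2, 1.6, 2.0, 2.4, 2.8]  # assumed DVFS levels
QOS_TARGET_MS = 50.0                              # assumed tail-latency target

def static_stage(latency_at_max_ms):
    """Static stage: for each load level, heuristically pick the lowest
    frequency whose estimated latency still meets the QoS target."""
    table = {}
    for load, lat_max in latency_at_max_ms.items():
        for f in AVAILABLE_FREQS_GHZ:
            # crude model (assumption): latency scales inversely with frequency
            est = lat_max * (AVAILABLE_FREQS_GHZ[-1] / f)
            if est <= QOS_TARGET_MS:
                table[load] = f
                break
        else:
            table[load] = AVAILABLE_FREQS_GHZ[-1]
    return table

def dynamic_stage(current_freq, observed_latency_ms, slack=0.9):
    """Dynamic stage: threshold check on observed latency to decide
    whether to adjust the pre-set frequency one level up or down."""
    i = AVAILABLE_FREQS_GHZ.index(current_freq)
    if observed_latency_ms > QOS_TARGET_MS and i < len(AVAILABLE_FREQS_GHZ) - 1:
        return AVAILABLE_FREQS_GHZ[i + 1]   # QoS at risk: raise frequency
    if observed_latency_ms < slack * QOS_TARGET_MS and i > 0:
        return AVAILABLE_FREQS_GHZ[i - 1]   # ample slack: lower frequency
    return current_freq
```

The static table sets a per-load starting frequency offline; the dynamic stage then corrects for runtime deviations with a cheap threshold test instead of re-solving the matching problem.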

Highlights

  • Cloud computing and big data technologies have promoted the vigorous development of the Internet

  • We propose a two-stage strategy based on the Dynamic Voltage and Frequency Scaling (DVFS) technique, which aims to reduce energy consumption while guaranteeing performance for latency-critical workloads

  • The proposed two-stage strategy dynamically schedules the CPU frequency based on the DVFS technique, so we compare it with the Ondemand strategy


Summary

Introduction

Cloud computing and big data technologies have promoted the vigorous development of the Internet. Building a data center costs millions of dollars, of which the energy cost accounts for a significant proportion of the total investment [9], [22]. The power consumption of these data centers is 416.2 terawatt hours, accounting for 2% of global electricity consumption [30]. These data centers also produce more than 43 million tons of carbon dioxide annually [2]. The Ondemand strategy is the default in the Linux system; it switches the CPU frequency between the highest and lowest levels according to CPU utilization. Because Ondemand schedules the CPU frequency automatically based on system load without exploiting the other frequency levels, it is inflexible and limits the room for energy saving.

