Abstract

As power/energy consumption is one of the major contributors to the Total Cost of Ownership (TCO), improving power/energy efficiency is crucial for large-scale data centers, which commonly host latency-critical applications while computing resources remain under-utilized. To improve the power/energy efficiency of processors, most commercial processors support Dynamic Voltage and Frequency Scaling (DVFS), which enables the voltage and frequency state (V/F state) of the processor to be adjusted dynamically. For latency-critical applications in particular, many prior studies propose DVFS-based power management policies that minimize performance degradation or satisfy Service Level Objective (SLO) constraints. Meanwhile, although the interrupt rate also considerably affects the response latency and energy efficiency of latency-critical applications, those prior studies introduce policies only for V/F state adjustment without considering the interrupt rate. Therefore, in this article, we investigate the impact of adjusting the interrupt rate on the tail response latency and energy consumption. Our experimental results show that adjusting the interrupt rate along with V/F state management varies performance and energy consumption considerably, and provides an opportunity to reduce energy further, as the latency ranges of different V/F states overlap. Based on this observation, we quantify the potential of co-adjusting the V/F state and interrupt rate for improving energy efficiency with a simple management policy, called Co-PI. Co-PI searches for the most energy-efficient combination of V/F state and interrupt rate in latency and energy tables obtained through offline profiling, and applies the combination to the cores and the NIC.
Co-PI reduces energy consumption by 34.1% and 25.1% compared with the performance and ondemand governors, respectively, while showing almost the same tail response latency as the performance governor, which statically operates cores at the highest V/F state.
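The selection step described above can be sketched as a simple table lookup. The following is a minimal illustration, not the authors' implementation: the table entries below are placeholder values standing in for offline-profiled measurements, and the P-state names and interrupt rates are assumptions for the example.

```python
# Sketch of Co-PI's selection step: pick the lowest-energy
# (V/F state, interrupt rate) combination whose offline-profiled
# tail latency still meets the target. All numbers are illustrative
# placeholders, not measured data from the article.

# Offline-profiled tables: (V/F state, interrupts/sec) -> P95 latency (us) / energy (J)
latency_table = {
    ("P0", 8000): 90,  ("P0", 4000): 110,
    ("P1", 8000): 120, ("P1", 4000): 95,
    ("P2", 8000): 180, ("P2", 4000): 140,
}
energy_table = {
    ("P0", 8000): 100, ("P0", 4000): 92,
    ("P1", 8000): 80,  ("P1", 4000): 75,
    ("P2", 8000): 60,  ("P2", 4000): 58,
}

def co_pi_select(target_latency_us):
    """Return the most energy-efficient (V/F state, interrupt rate)
    combination that meets the target tail latency, or None."""
    feasible = [cfg for cfg, lat in latency_table.items()
                if lat <= target_latency_us]
    if not feasible:
        return None
    return min(feasible, key=lambda cfg: energy_table[cfg])

# With these placeholder numbers, a 100 us target is met by ("P0", 8000)
# and ("P1", 4000); the latter wins on energy (75 J vs 100 J).
print(co_pi_select(100))
```

In the actual system, the chosen V/F state would be applied to the cores and the interrupt rate to the NIC (e.g. through its Interrupt Throttle Register); this sketch only captures the lookup logic.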

Highlights

  • In large-scale data centers, power/energy efficiency is essential since power/energy consumption is a major contributor to the Total Cost of Ownership (TCO)

  • To evaluate Co-PI, we use as the target latency the P95 latency measured when cores statically operate at the P0 state with the default Interrupt Throttle Register (ITR) management policy offered by the Intel Network Interface Card (NIC) driver

  • In this paper, we show the impact of adjusting the interrupt rate on the performance and energy consumption of latency-critical applications


Summary

INTRODUCTION

In large-scale data centers, power/energy efficiency is essential since power/energy consumption is a major contributor to the Total Cost of Ownership (TCO). Many prior studies propose power management policies that exploit DVFS for latency-critical applications to improve energy efficiency while minimizing performance degradation or avoiding SLO violations [10]–[14]. Since the interrupt rate has a considerable impact on the response latency of latency-critical applications, adjusting the interrupt rate provides an opportunity to lower the V/F state of the cores without degrading performance: if the application still meets the same tail response latency after the V/F state is decreased, energy consumption is reduced at no performance cost. In this paper, we experimentally analyze the impact of interrupt management at each V/F state of the processor on the response latency and energy consumption of two representative latency-critical applications, memcached [18] and nginx [19].

BACKGROUND
EXPERIMENTAL METHODOLOGY
CONCLUSION