Abstract
Owing to its high portability and programmability, Dynamic Voltage and Frequency Scaling (DVFS) has been widely employed, through various scheduling algorithms, to improve the energy efficiency of high-performance applications on distributed-memory architectures. In general, slack arising from load imbalance, network latency, communication delay, and memory and disk access stalls is exploited as an energy-saving opportunity where peak CPU performance is unnecessary, incurring little or limited performance loss. Deploying DVFS for communication-intensive applications is straightforward because the boundaries of Energy Saving Blocks (ESBs) are explicit at the source-code level; for data-intensive applications (e.g., those dominated by memory and disk accesses), however, applying DVFS is difficult because ESB boundaries are implicit in the mixed types of workloads. We propose an adaptively aggressive DVFS scheduling strategy that achieves energy efficiency for data-intensive applications, and further saves energy via speculation to mitigate DVFS overhead on imbalanced branches. We implemented our approach and evaluated it against two other energy-saving approaches using five memory- and disk-access-intensive benchmarks with imbalanced branches. The experimental results show an average of 32.6% energy savings with 6.2% average performance loss relative to the original executions on a power-aware 64-core cluster.
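To make the ESB idea concrete, the following is a minimal conceptual sketch, not the paper's implementation: the CPU frequency is lowered on entry to a slack region (where the CPU is not the bottleneck) and restored to peak on exit. The names `set_cpu_frequency` and the frequency values are hypothetical placeholders for a platform interface such as Linux cpufreq.

```python
# Conceptual sketch of DVFS around an Energy Saving Block (ESB).
# `set_cpu_frequency` is a hypothetical stand-in for a real platform
# interface (e.g., writing to the cpufreq sysfs files).

PEAK_FREQ_MHZ = 2400   # assumed peak frequency
LOW_FREQ_MHZ = 800     # assumed reduced frequency

applied = []  # records frequency transitions, for illustration only

def set_cpu_frequency(mhz):
    # Placeholder: a real system would use a DVFS interface here.
    applied.append(mhz)

def run_esb(slack_workload):
    """Run a slack region (e.g., a blocking communication wait or a
    disk/memory stall) at reduced frequency, then restore peak
    frequency for the compute phases that follow."""
    set_cpu_frequency(LOW_FREQ_MHZ)    # CPU is not the bottleneck here
    slack_workload()                   # communication / memory / disk stall
    set_cpu_frequency(PEAK_FREQ_MHZ)   # back to peak for computation

run_esb(lambda: None)
print(applied)  # [800, 2400]
```

For communication-intensive codes the ESB boundary is the visible communication call; the paper's contribution concerns the harder case where, as the abstract notes, such boundaries are implicit in mixed workloads.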