Abstract

Recently, a large number of convolutional neural network (CNN) inference services have been deployed on high-performance Graphics Processing Units (GPUs). However, GPUs consume substantial power, and energy consumption rises sharply as more deep learning tasks are deployed. Although previous studies have considered the latency service-level objective (SLO) of inference services, they fail to take energy consumption directly into account. Our investigation shows that coordinating batching and dynamic voltage and frequency scaling (DVFS) settings can reduce the energy consumption of CNN inference, but doing so is complicated by (i) the large configuration space; (ii) GPU underutilization while data are transferred between the CPU and the GPU; and (iii) fluctuating workloads. In this paper, we propose EAIS, an energy-aware adaptive scheduling framework comprising a performance model, an asynchronous execution strategy, and an energy-aware scheduler. The performance model characterizes the performance of CNN inference services and thereby shrinks the feasible configuration space. The asynchronous execution strategy overlaps data upload with GPU execution to improve system processing capacity. The energy-aware scheduler adaptively coordinates batching and DVFS under fluctuating workloads to minimize energy consumption while meeting the latency SLO. Experimental results on NVIDIA Tesla M40 and V100 GPUs show that, compared to state-of-the-art methods, EAIS decreases energy consumption by up to 28.02% and improves system processing capacity by up to 7.22% while meeting the latency SLO. Moreover, EAIS is shown to generalize well across different latency SLO constraints.
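
To make the asynchronous execution idea concrete, the listing below is a minimal sketch of overlapping data upload with GPU execution using two CUDA streams and double-buffered device memory. The kernel name inferBatch, the buffer layout, and the batch count are illustrative assumptions; the abstract does not specify EAIS's actual implementation.

// Minimal sketch: overlap host-to-device copies with kernel execution
// using two CUDA streams (assumed structure, not EAIS's real code).
#include <cuda_runtime.h>

__global__ void inferBatch(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];  // placeholder for real CNN inference work
}

int main() {
    const int N = 1 << 20;                    // elements per batch
    const size_t bytes = N * sizeof(float);

    float *hIn, *dIn[2], *dOut[2];
    cudaMallocHost((void**)&hIn, 2 * bytes);  // pinned memory enables async copies
    cudaStream_t stream[2];
    for (int s = 0; s < 2; ++s) {
        cudaMalloc((void**)&dIn[s], bytes);
        cudaMalloc((void**)&dOut[s], bytes);
        cudaStreamCreate(&stream[s]);
    }

    // While one stream executes the kernel on batch k, the other stream
    // uploads batch k+1, hiding PCIe transfer time behind GPU computation.
    for (int k = 0; k < 8; ++k) {
        int s = k % 2;
        cudaMemcpyAsync(dIn[s], hIn + s * N, bytes,
                        cudaMemcpyHostToDevice, stream[s]);
        inferBatch<<<(N + 255) / 256, 256, 0, stream[s]>>>(dIn[s], dOut[s], N);
    }
    cudaDeviceSynchronize();

    for (int s = 0; s < 2; ++s) {
        cudaFree(dIn[s]);
        cudaFree(dOut[s]);
        cudaStreamDestroy(stream[s]);
    }
    cudaFreeHost(hIn);
    return 0;
}

Note that this overlap only materializes when host buffers are pinned (allocated with cudaMallocHost); with pageable memory, asynchronous copies cannot overlap with kernel execution.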
