In clouds and data centers, GPU servers equipped with multiple GPUs are widely deployed. Current state-of-the-art GPU scheduling policies are “static” in assigning applications to different GPUs: they usually ignore the dynamics of GPU utilization and are often inaccurate in estimating resource demand before assigning or running applications, leaving a large opportunity to further balance load and improve GPU utilization. Based on CUDA (Compute Unified Device Architecture), we develop a runtime system called DCUDA that supports “dynamic” scheduling of running applications among multiple GPUs. In particular, DCUDA takes multidimensional resources into consideration, including computing cores, memory usage, and energy consumption. It first provides a real-time and lightweight method to accurately monitor the resource demand of applications and the utilization of GPUs. Furthermore, it provides a universal migration facility to migrate “running applications” between GPUs with negligible overhead. More importantly, DCUDA transparently supports all CUDA applications without changing their source code. Experiments with our prototype system show that DCUDA reduces the overloaded time of GPUs by 78.3% on average. As a result, for the workloads we studied, consisting of a wide range of applications, DCUDA reduces the average execution time of general applications by up to 42.1%, and by up to 67% for memory-intensive applications. In addition, DCUDA reduces energy consumption by 13.3% in light-load scenarios.
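The abstract does not specify how the lightweight monitoring is implemented. As an illustration only, the following minimal C sketch shows how the multidimensional load signals the abstract names (compute cores, memory usage, and power draw) could be sampled per GPU through NVIDIA's NVML library; the NVML calls are real, but their use as DCUDA's monitoring mechanism is an assumption, not the paper's actual code.

```c
/* Illustrative sketch: sample per-GPU compute, memory, and power signals
 * with NVML, the kind of lightweight input a dynamic scheduler like
 * DCUDA would need. Compile with: gcc monitor.c -lnvidia-ml */
#include <stdio.h>
#include <nvml.h>

int main(void) {
    unsigned int count;
    if (nvmlInit() != NVML_SUCCESS) return 1;
    nvmlDeviceGetCount(&count);

    for (unsigned int i = 0; i < count; i++) {
        nvmlDevice_t dev;
        nvmlUtilization_t util;   /* % of time SMs / memory were busy */
        nvmlMemory_t mem;         /* device memory usage in bytes */
        unsigned int power_mw;    /* current power draw in milliwatts */

        nvmlDeviceGetHandleByIndex(i, &dev);
        nvmlDeviceGetUtilizationRates(dev, &util);
        nvmlDeviceGetMemoryInfo(dev, &mem);
        nvmlDeviceGetPowerUsage(dev, &power_mw);

        printf("GPU %u: cores %u%%, mem-bw %u%%, mem %llu/%llu MiB, %.1f W\n",
               i, util.gpu, util.memory,
               mem.used >> 20, mem.total >> 20, power_mw / 1000.0);
    }
    nvmlShutdown();
    return 0;
}
```

A scheduler would poll such counters periodically to detect overloaded GPUs and pick migration targets; the actual DCUDA monitoring and migration mechanisms are described in the full paper.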