Abstract

Recent years have seen rapid adoption of GPUs across many types of platforms because of the tremendous throughput enabled by massive parallelism. However, as the computing power of GPUs continues to grow at a rapid pace, it also becomes harder to utilize these additional resources effectively without support for GPU sharing. In this work, we designed and implemented Gemini, a user-space runtime scheduling framework that enables fine-grained GPU allocation control with support for multi-tenancy and elastic allocation, both of which are critical for cloud and resource providers. Our key idea is the concept of a kernel burst: a group of consecutive kernels launched together without being interrupted by synchronous events. Based on the characteristics of kernel bursts, we propose a low-overhead event-driven monitor and a dynamic time-sharing scheduler to achieve these goals. Our experimental evaluation using five types of GPU applications shows that Gemini enables multi-tenant and elastic GPU allocation with less than 5% performance overhead. Furthermore, compared to static scheduling, Gemini achieves a 20%–30% performance improvement without requiring prior knowledge of the applications.
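To make the kernel-burst definition above concrete, here is a minimal sketch of how a stream of GPU events could be grouped into bursts: consecutive kernel launches belong to one burst until a synchronous event (e.g. a device synchronize or a blocking memory copy) ends it. The event format and function name are illustrative assumptions, not Gemini's actual implementation.

```python
def group_kernel_bursts(events):
    """Split a stream of ("kernel" | "sync", timestamp) events into bursts.

    A burst is a run of consecutive kernel launches; any synchronous
    event terminates the current burst. Returns a list of bursts, each
    a list of kernel-launch timestamps.
    """
    bursts, current = [], []
    for kind, ts in events:
        if kind == "kernel":
            current.append(ts)
        elif kind == "sync" and current:
            bursts.append(current)  # a synchronous event ends the burst
            current = []
    if current:  # flush a trailing burst with no closing sync
        bursts.append(current)
    return bursts

# Example trace: three launches, a sync, then two more launches -> two bursts.
trace = [("kernel", 0.0), ("kernel", 0.1), ("kernel", 0.2),
         ("sync", 0.3), ("kernel", 0.4), ("kernel", 0.5)]
print(group_kernel_bursts(trace))  # [[0.0, 0.1, 0.2], [0.4, 0.5]]
```

Because synchronization points are comparatively rare, monitoring at burst boundaries rather than per kernel launch is what would keep such an event-driven monitor low-overhead.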
