Abstract
Virtual machines (VMs) are a mature technology that has been in widespread use for decades. VMs reduce the acquisition costs of data centers as well as the cost of operating such computing facilities, mainly with respect to electricity. However, although VMs are a well-established technology, they do not yet efficiently support the use of CUDA-compatible GPUs (Graphics Processing Units) for computation, even though such devices are commonly employed to reduce the execution time of applications. The main concern with the way VMs use GPUs is that these devices cannot be concurrently shared among VMs and, therefore, the flexibility provided by VMs does not extend to GPUs. In this paper we propose to use the rCUDA remote GPU virtualization middleware to efficiently share GPUs among VMs. Our experiments show that sharing GPUs among VMs is beneficial in terms of overall throughput, while increasing the execution time of individual applications by only a small percentage. Additionally, different levels of overhead can be selected in order to offer customers different qualities of service at different fees. Finally, in addition to the increase in overall throughput, total energy consumption is reduced.