Abstract

Multi-GPU systems are an attractive platform for speeding up data-parallel GPGPU computation. The idea of split-and-merge execution has been introduced to exploit the parallelism of multiple GPUs even further. However, how to apply this idea properly to real-time multi-GPU systems has not been explored. This paper presents GPU-SAM, an open-source real-time multi-GPU scheduling framework that transparently splits each GPGPU application into smaller computation units and executes them in parallel across multiple GPUs, aiming to satisfy real-time constraints. Multi-GPU split-and-merge execution offers the potential to reduce overall execution time, but it also affects the schedulability of individual applications in different, seemingly conflicting ways. We therefore analyze the benefit and cost of split-and-merge execution on multiple GPUs and derive a schedulability analysis that captures these conflicting influences. We also propose a GPU parallelism assignment policy that determines the multi-GPU execution mode of each application from the perspective of system-wide schedulability. Our experimental results show that GPU-SAM improves schedulability in real-time multi-GPU systems by relaxing the restriction of launching a kernel on only a single GPU and by choosing better multi-GPU execution modes.
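
To make the split-and-merge idea concrete, the sketch below shows how a single data-parallel kernel's index range might be split into contiguous chunks across all visible GPUs and the partial results merged back on the host, using plain CUDA. This is a minimal illustration of the general technique only, not the GPU-SAM framework described in the paper; the kernel `scale`, the chunking scheme, and all buffer names are illustrative assumptions.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Hypothetical element-wise kernel standing in for an arbitrary GPGPU workload.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int N = 1 << 20;
    std::vector<float> host(N, 1.0f);

    int num_gpus = 0;
    cudaGetDeviceCount(&num_gpus);
    if (num_gpus == 0) return 1;

    // Split: divide the input range into one contiguous chunk per GPU.
    int chunk = (N + num_gpus - 1) / num_gpus;
    std::vector<float *> dev_buf(num_gpus, nullptr);

    for (int g = 0; g < num_gpus; ++g) {
        int begin = g * chunk;
        int count = (begin + chunk <= N) ? chunk : N - begin;
        if (count <= 0) continue;

        cudaSetDevice(g);
        cudaMalloc(&dev_buf[g], count * sizeof(float));
        cudaMemcpyAsync(dev_buf[g], host.data() + begin,
                        count * sizeof(float), cudaMemcpyHostToDevice);
        scale<<<(count + 255) / 256, 256>>>(dev_buf[g], count, 2.0f);
    }

    // Merge: copy each GPU's partial result back into the shared host buffer.
    // The synchronous cudaMemcpy waits for the kernel on each device.
    for (int g = 0; g < num_gpus; ++g) {
        int begin = g * chunk;
        int count = (begin + chunk <= N) ? chunk : N - begin;
        if (count <= 0) continue;

        cudaSetDevice(g);
        cudaMemcpy(host.data() + begin, dev_buf[g],
                   count * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(dev_buf[g]);
    }

    printf("host[0] = %f\n", host[0]);  // expect 2.0
    return 0;
}
```

In a real-time setting, whether such a split pays off depends on the per-GPU launch and transfer overhead versus the reduced kernel execution time, which is the trade-off the paper's schedulability analysis is meant to capture.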
