Abstract

GPUs have emerged as popular throughput computing platforms due to their massively parallel computing capability and low cost. To attain performance gains beyond a single GPU, there is growing interest in exploiting systems with multiple GPUs. Achieving superior performance on a multi-GPU system involves three main design challenges: load balance, memory utilization, and data transfer. An imbalanced load across the system can leave GPUs idle, while poor data reuse triggers excessive memory accesses. In addition, inefficient data transfer between the host and the devices becomes a considerable performance overhead in high-throughput computing. This paper addresses these design issues by proposing Computation and Communication Aware task graph Scheduling (CCAS) for multi-GPU systems. CCAS adopts an effective heuristic algorithm that considers both data reuse and load balance across the GPUs, and it hides the data transfer overhead by extensively overlapping computation and data communication. Experimental results show that CCAS achieves an average performance improvement of 22.15% over a previous work.
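The abstract's claim that transfer overhead is "hidden by extensively overlapping computation and data communication" refers to a standard GPU technique. The sketch below is not the authors' CCAS implementation; it only illustrates the underlying mechanism on a single device using CUDA streams and asynchronous copies from pinned host memory. The kernel `scale`, the chunk size, and the stream count are illustrative assumptions.

```cuda
// Minimal sketch of compute/communication overlap with CUDA streams.
// Not the CCAS scheduler itself; kernel, sizes, and stream count are illustrative.
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void scale(float *d, int n, float f) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= f;
}

int main() {
    const int N = 1 << 22;          // total elements
    const int NSTREAMS = 4;         // number of streams (illustrative)
    const int CHUNK = N / NSTREAMS; // elements handled per stream

    float *h, *d;
    cudaMallocHost((void **)&h, N * sizeof(float)); // pinned memory enables async copies
    cudaMalloc((void **)&d, N * sizeof(float));
    for (int i = 0; i < N; ++i) h[i] = 1.0f;

    cudaStream_t streams[NSTREAMS];
    for (int s = 0; s < NSTREAMS; ++s) cudaStreamCreate(&streams[s]);

    // Each stream copies its chunk in, runs the kernel on it, and copies it back.
    // Copies queued in one stream can overlap with kernels running in another,
    // so part of the transfer time is hidden behind computation.
    for (int s = 0; s < NSTREAMS; ++s) {
        int off = s * CHUNK;
        cudaMemcpyAsync(d + off, h + off, CHUNK * sizeof(float),
                        cudaMemcpyHostToDevice, streams[s]);
        scale<<<(CHUNK + 255) / 256, 256, 0, streams[s]>>>(d + off, CHUNK, 2.0f);
        cudaMemcpyAsync(h + off, d + off, CHUNK * sizeof(float),
                        cudaMemcpyDeviceToHost, streams[s]);
    }
    cudaDeviceSynchronize();

    printf("h[0] = %f\n", h[0]);    // expect 2.0 after the scaling kernel
    for (int s = 0; s < NSTREAMS; ++s) cudaStreamDestroy(streams[s]);
    cudaFreeHost(h);
    cudaFree(d);
    return 0;
}
```

In a multi-GPU setting, a scheduler such as CCAS would additionally decide which device and stream each task is issued to, balancing load and reusing data already resident on a device; the overlap mechanism shown here stays the same.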
