Abstract

Graphics processing units (GPUs) are increasingly critical for general-purpose parallel processing performance. GPU hardware comprises many streaming multiprocessors, each of which employs the single-instruction multiple-data (SIMD) execution style. This massively parallel architecture allows GPUs to execute tens of thousands of threads in parallel, so GPU architectures efficiently execute heavily data-parallel applications. However, because of this SIMD execution style, resource utilization, and thus overall performance, can suffer significantly when computation threads must take diverging control paths. Control flow divergence in GPUs is a well-known problem: prior approaches have attempted to reduce it through code transformations, memory access indirection, and input data reorganization. However, as we demonstrate, the utility of these transformations is seriously limited by the lack of a guiding metric that properly estimates how control flow divergence affects application performance. In this paper, we introduce a metric that simply and accurately estimates the performance of computation-bound GPU kernels with control flow divergence, and we use the metric as a value function for thread re-grouping algorithms. We measure performance on an NVIDIA GTS 250 GPU. For the tested set of applications, our experiments demonstrate that the proposed metric correlates well with actual GPU application performance. Through thread re-grouping guided by our metric, control flow divergence optimization improves application performance by up to 3.19X.
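
As a concrete illustration of the divergence problem the abstract describes (this sketch is not taken from the paper), the following minimal CUDA kernel contains a data-dependent branch. Threads of the same warp that take different sides of the branch are serialized, with some SIMD lanes masked off on each path, which is the utilization loss at issue. The kernel name, threshold, workload, and alternating input pattern are hypothetical; sorting or re-grouping the input so that threads in a warp agree on the branch outcome is one simple form of the re-grouping such optimizations apply.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread takes a data-dependent branch. Threads of the same warp that
// choose different sides are serialized: the warp executes both paths with
// some SIMD lanes masked off, reducing resource utilization.
__global__ void divergent_kernel(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (in[i] > 0.5f) {                    // "heavy" path
        float v = in[i];
        for (int k = 0; k < 64; ++k)
            v = v * 1.0001f + 0.5f;
        out[i] = v;
    } else {                               // "light" path
        out[i] = in[i] * 2.0f;
    }
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *h_in = (float *)malloc(bytes);
    float *h_out = (float *)malloc(bytes);

    // Alternating inputs force every warp to contain both branch outcomes,
    // maximizing divergence; re-grouping threads (or reordering the input)
    // so that a warp's threads agree on the branch removes the serialization.
    for (int i = 0; i < n; ++i)
        h_in[i] = (i % 2) ? 0.9f : 0.1f;

    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    divergent_kernel<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

    printf("out[0] = %f, out[1] = %f\n", h_out[0], h_out[1]);

    cudaFree(d_in); cudaFree(d_out);
    free(h_in); free(h_out);
    return 0;
}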
