Abstract

With general-purpose programmable GPUs becoming increasingly popular, automated tools are needed to bridge the gap between the performance achievable on highly parallel architectures and the performance required by applications. In this paper, we concentrate on improving GPU memory management for applications whose large input and intermediate data sets do not completely fit in GPU memory. For such applications, the movement of the excess data to CPU memory must be carefully managed. In particular, we focus on solving the joint task scheduling and data transfer scheduling problem posed in [1], and propose an algorithm that gives close-to-optimal results (as measured against a simulated annealing baseline run overnight) in terms of the amount of data transferred, for image processing benchmarks such as edge detection and convolutional neural networks. Our results enable a reduction of up to 30× in the amount of data transferred compared to an unoptimized implementation. They are up to 2× better than the methods previously proposed in [1] and less than 16% away from the optimal solution.
