Abstract

The calculation of pairwise correlation coefficients on a dataset, known as the correlation matrix, is widely used in data analysis, signal processing, pattern recognition, image processing, and bioinformatics. With state-of-the-art Graphics Processing Units (GPUs), which consist of massive numbers of cores capable of processing at rates up to several Gflops, the calculation of the correlation matrix can be accelerated several times over traditional CPUs. However, due to the rapid growth of data in the digital era, correlation matrix calculation becomes so compute-intensive that it needs to be executed on multiple GPUs. GPUs are now common components in the data centers of many institutions, and their deployment tends toward GPU clusters in which each node is equipped with one or more GPUs. In this paper, we propose a parallel computing approach based on hybrid MPI/CUDA programming for fast and efficient Pearson correlation matrix calculation on GPU clusters. At the coarse-grained level, the correlation matrix is partitioned into tiles that are distributed via MPI to execute concurrently on many GPUs. At the fine-grained level, the CUDA kernel function on each node performs massively parallel computation on a GPU. To balance the load across all GPUs, we adopt the work-pool model, in which a master node manages tasks in the work pool and dynamically assigns them to worker nodes. The evaluation results show that the proposed approach balances the load across different GPUs and thus achieves better execution time than simple static data partitioning.
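
To make the two levels of parallelism concrete, the sketch below shows one possible shape of the scheme described above: rank 0 acts as the master that hands out tile coordinates from the work pool, while every other MPI rank drives a GPU with a CUDA kernel that computes one tile of the Pearson correlation matrix. This is a minimal illustration under our own assumptions, not the paper's implementation; the problem sizes (M, N, TILE), the request/stop message protocol, and names such as pearson_tile are hypothetical.

```c
/* Minimal sketch (not the paper's code) of a hybrid MPI/CUDA work pool
 * for a tiled Pearson correlation matrix. Sizes, tags, and names are
 * illustrative assumptions. Build example: nvcc -ccbin mpicxx pool.cu */
#include <mpi.h>
#include <cuda_runtime.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define M    1024   /* variables (rows of the data matrix)            */
#define N     256   /* samples per variable                           */
#define TILE  128   /* each task = one TILE x TILE tile of the matrix */

/* One thread per (i, j): rows are pre-normalized to zero mean and unit
 * L2 norm, so a dot product of two rows is their Pearson correlation. */
__global__ void pearson_tile(const float *data, float *out,
                             int row0, int col0) {
    int i = row0 + blockIdx.y * blockDim.y + threadIdx.y;
    int j = col0 + blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= M || j >= M) return;
    float s = 0.0f;
    for (int k = 0; k < N; ++k)
        s += data[i * N + k] * data[j * N + k];
    out[(i - row0) * TILE + (j - col0)] = s;
}

static void normalize(float *x) {   /* zero mean, unit norm per row */
    for (int i = 0; i < M; ++i) {
        float mean = 0.0f, nrm = 0.0f;
        for (int k = 0; k < N; ++k) mean += x[i * N + k];
        mean /= N;
        for (int k = 0; k < N; ++k) {
            x[i * N + k] -= mean;
            nrm += x[i * N + k] * x[i * N + k];
        }
        nrm = sqrtf(nrm);
        for (int k = 0; k < N; ++k) x[i * N + k] /= nrm;
    }
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) { fprintf(stderr, "need >= 2 ranks\n");
                    MPI_Abort(MPI_COMM_WORLD, 1); }

    /* Identical synthetic input on every rank (same default seed). */
    float *data = (float *)malloc(M * N * sizeof(float));
    for (int i = 0; i < M * N; ++i) data[i] = (float)rand() / RAND_MAX;
    normalize(data);

    int nt = M / TILE;                        /* tiles per dimension   */
    if (rank == 0) {                          /* --- master ---        */
        int coords[2];
        for (int ti = 0; ti < nt; ++ti)       /* upper triangle only:  */
            for (int tj = ti; tj < nt; ++tj) {/* matrix is symmetric   */
                MPI_Status st;                /* wait for idle worker  */
                MPI_Recv(NULL, 0, MPI_INT, MPI_ANY_SOURCE, 0,
                         MPI_COMM_WORLD, &st);
                coords[0] = ti * TILE; coords[1] = tj * TILE;
                MPI_Send(coords, 2, MPI_INT, st.MPI_SOURCE, 0,
                         MPI_COMM_WORLD);
            }
        coords[0] = -1;                       /* stop signal           */
        for (int w = 1; w < size; ++w) {
            MPI_Status st;
            MPI_Recv(NULL, 0, MPI_INT, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, &st);
            MPI_Send(coords, 2, MPI_INT, st.MPI_SOURCE, 0,
                     MPI_COMM_WORLD);
        }
    } else {                                  /* --- worker ---        */
        float *d_data, *d_out;
        float *tile = (float *)malloc(TILE * TILE * sizeof(float));
        cudaMalloc((void **)&d_data, M * N * sizeof(float));
        cudaMalloc((void **)&d_out, TILE * TILE * sizeof(float));
        cudaMemcpy(d_data, data, M * N * sizeof(float),
                   cudaMemcpyHostToDevice);
        dim3 blk(16, 16), grd(TILE / 16, TILE / 16);
        int coords[2];
        for (;;) {
            MPI_Send(NULL, 0, MPI_INT, 0, 0, MPI_COMM_WORLD); /* ask  */
            MPI_Recv(coords, 2, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            if (coords[0] < 0) break;                         /* done */
            pearson_tile<<<grd, blk>>>(d_data, d_out,
                                       coords[0], coords[1]);
            cudaMemcpy(tile, d_out, TILE * TILE * sizeof(float),
                       cudaMemcpyDeviceToHost);
            /* tile now holds r[i][j]; gathering to rank 0 is omitted */
        }
        cudaFree(d_data); cudaFree(d_out); free(tile);
    }
    free(data);
    MPI_Finalize();
    return 0;
}
```

Because each row is normalized up front, every correlation coefficient reduces to a dot product, and since the matrix is symmetric the master only ever assigns upper-triangle tiles; workers that finish early simply request the next tile, which is what gives the work-pool model its dynamic load balance.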
