Abstract

In recent years, graphics processing units (GPUs) have been adopted in many High-Performance Computing (HPC) systems due to their massive computational power and superior energy efficiency, and accelerating CPU-based computational codes on heterogeneous clusters with multi-core CPUs and GPUs has attracted considerable attention. A key focus of heterogeneous computing is to efficiently exploit all computational resources available on a cluster, both CPUs and GPUs. In this paper, a heterogeneous MPI + OpenMP/CUDA parallel algorithm for solving the 2D neutron transport equation with the method of characteristics (MOC) is implemented. In this algorithm, spatial domain decomposition provides coarse-grained parallelism through MPI, while fine-grained parallelism is exploited through ray parallelization with OpenMP (in CPU-computed domains) and CUDA (in GPU-computed domains). To efficiently leverage the computing power of heterogeneous clusters, a dynamic workload assignment scheme is proposed that distributes the workload according to the measured runtime performance of the CPUs and GPUs in the cluster. Moreover, the strong-scaling performance of the MPI + CUDA parallelization is studied through a performance analysis model that quantifies the impact of the degraded iteration scheme, load imbalance, CPU-GPU data transfers, and MPI communication in the MPI + CUDA parallel algorithm; the corresponding conclusions also hold for the MPI + OpenMP/CUDA parallelization. The C5G7 2D benchmark and an extended 2D whole-core problem are calculated with the MPI + CUDA, MPI + OpenMP/CUDA, and pure MPI parallelizations for comparison. Numerical results demonstrate that the heterogeneous parallel algorithm maintains the desired accuracy, and that the dynamic workload assignment scheme provides an optimal workload distribution that closely matches the experimental results. In addition, the MPI + OpenMP/CUDA parallelization achieves an improvement of over 11% compared with the MPI + CUDA parallelization. Moreover, the CPU/GPU heterogeneous cluster significantly outperforms the CPU-only cluster, with one heterogeneous node running roughly five times faster than a CPU-only node.
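The abstract describes the dynamic workload assignment scheme only at a high level: work is distributed according to the measured runtime performance of the CPUs and GPUs. The following is a minimal sketch of that general idea, assuming a throughput-proportional split of characteristic rays between a CUDA-computed part and an OpenMP-computed part; the function names, structure, and example numbers are illustrative assumptions, not the authors' implementation.

```c
/* Illustrative sketch (assumed interface, not the paper's code):
 * redistribute N characteristic rays between the GPU part and the CPU
 * part of a domain in proportion to the per-ray throughput measured
 * during the previous transport sweep. */
#include <stdio.h>

typedef struct {
    int n_gpu_rays;   /* rays assigned to the CUDA kernel  */
    int n_cpu_rays;   /* rays assigned to the OpenMP loops */
} RaySplit;

/* prev_gpu/prev_cpu: ray counts of the previous sweep;
 * t_gpu/t_cpu: measured wall-clock times of the previous sweep. */
RaySplit rebalance(int n_rays, int prev_gpu, int prev_cpu,
                   double t_gpu, double t_cpu)
{
    RaySplit s;
    double rate_gpu = prev_gpu / t_gpu;            /* rays per second */
    double rate_cpu = prev_cpu / t_cpu;
    double frac_gpu = rate_gpu / (rate_gpu + rate_cpu);

    s.n_gpu_rays = (int)(frac_gpu * n_rays + 0.5);
    s.n_cpu_rays = n_rays - s.n_gpu_rays;
    return s;
}

int main(void)
{
    /* Hypothetical example: the previous sweep handled 8000 rays on the
     * GPU in 0.20 s and 2000 rays on the CPUs in 0.25 s; redistribute
     * 10000 rays so both parts finish at roughly the same time. */
    RaySplit s = rebalance(10000, 8000, 2000, 0.20, 0.25);
    printf("GPU rays: %d, CPU rays: %d\n", s.n_gpu_rays, s.n_cpu_rays);
    return 0;
}
```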
