Abstract

Simulating radiative heat transfer in a graded-index (GRIN) medium is particularly challenging because rays propagate along curved trajectories. The Monte Carlo method is an effective approach: it is easy to implement and highly accurate. However, it is time consuming, and the computing time increases substantially when it is combined with the Runge-Kutta ray tracing technique used to obtain ray trajectories in a GRIN medium. Because the Monte Carlo method is ideally suited to parallel processing architectures and to acceleration with graphics processing units (GPUs), we have developed a fast GPU Monte Carlo implementation for radiative heat transfer in GRIN media. The performance of the GPU implementation was improved by combining the ray tracing process with a binary search and by optimizing the code for the GPU architecture. In particular, utilization of the GPU hardware was maximized, and warp inactivity was substantially reduced. Two- and three-dimensional GRIN medium models were evaluated to assess the accuracy and performance of the GPU implementations. Compared with equivalent central processing unit (CPU) implementations, the GPU implementations presented in this paper produce physically accurate results with substantial speedups. For the two-dimensional case, the GPU implementation on a single GPU reaches a speedup of 43.13× over the equivalent CPU implementation on a single CPU core and 5.65× over the equivalent CPU implementation on 6 CPU cores (12 threads). For the three-dimensional case, the speedup on a single GPU reaches 35.61× over a single CPU core and 2.07× over 14 CPU cores (28 threads).
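To illustrate the Runge-Kutta ray tracing the abstract refers to, the sketch below integrates the standard GRIN ray equation d²r/dt² = ½∇(n²) (with the path parameterized by ds = n dt) using classic RK4. This is not the paper's GPU code; the parabolic index profile n²(x, y) = n₀² − α²y², the profile parameters, and the function names are illustrative assumptions chosen so the curved trajectory has a known analytic form (y oscillates sinusoidally).

```python
import numpy as np

def grad_n2(r, alpha=0.2):
    # Hypothetical 2D "selfoc"-style profile: n^2(x, y) = n0^2 - alpha^2 * y^2,
    # so grad(n^2) = (0, -2 * alpha^2 * y). Any smooth profile could be used.
    return np.array([0.0, -2.0 * alpha**2 * r[1]])

def trace_ray(r0, v0, dt=1e-3, steps=5000, grad=grad_n2):
    # RK4 integration of the GRIN ray equation d^2 r / dt^2 = 0.5 * grad(n^2),
    # where t parameterizes the ray path via ds = n dt.
    r, v = np.asarray(r0, float), np.asarray(v0, float)
    path = [r.copy()]
    accel = lambda rr: 0.5 * grad(rr)
    for _ in range(steps):
        k1r, k1v = v, accel(r)
        k2r, k2v = v + 0.5 * dt * k1v, accel(r + 0.5 * dt * k1r)
        k3r, k3v = v + 0.5 * dt * k2v, accel(r + 0.5 * dt * k2r)
        k4r, k4v = v + dt * k3v, accel(r + dt * k3r)
        r = r + dt / 6.0 * (k1r + 2 * k2r + 2 * k3r + k4r)
        v = v + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v)
        path.append(r.copy())
    return np.array(path)
```

For this parabolic profile the ray equation reduces to y'' = −α²y, so a ray launched at y₀ with zero transverse slope follows y(t) = y₀ cos(αt), which gives a simple check on the integrator's accuracy.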
