Abstract

In this work we present a GPU implementation that accelerates the calculation of neutral gas flow in a single-null DEMO divertor configuration, using the DIVGAS (divertor gas simulator) code. For comparison purposes, several types of GPUs are used, including dedicated scientific-computing GPUs as well as consumer gaming GPUs. The computational accuracy of the DIVGAS code on GPUs has been validated against the corresponding CPU-based benchmark case. To evaluate the performance gains, the computing time on each GPU is compared against that of its sequential CPU counterpart. The measured speedups show that the GPU can accelerate the execution of the DIVGAS code by a factor of 60. The speedup of the DIVGAS code scales linearly with both the double-precision peak performance and the memory bandwidth of the GPU. The parallelization approach presented here significantly reduces the cost of DIVGAS simulations and has the potential to scale to large CPU/GPU clusters, potentially enabling future applications that address even more complex 3D neutral flow problems. The GPU-accelerated version of the DIVGAS code is considered a major breakthrough in reducing the computational time required for fusion-related applications.
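The reported linear scaling of the speedup with a GPU's double-precision peak performance and memory bandwidth is what a simple roofline-style estimate would predict. The sketch below illustrates that relation with purely hypothetical device figures (the function names and all numbers are illustrative assumptions, not quantities taken from the paper):

```python
# Roofline-style sketch: if a kernel is limited either by double-precision
# throughput or by memory bandwidth, the GPU-vs-CPU speedup is roughly the
# ratio of the binding resource on each device.
# All device figures below are illustrative placeholders, not measurements
# from the DIVGAS study.

def attainable_rate(peak_flops, bandwidth, intensity):
    """Attainable FLOP rate for a kernel with the given arithmetic
    intensity (FLOPs per byte), following the roofline model."""
    return min(peak_flops, bandwidth * intensity)

def estimated_speedup(gpu, cpu, intensity):
    """Ratio of attainable rates: linear in whichever GPU resource
    (peak FP64 rate or memory bandwidth) binds at this intensity."""
    return attainable_rate(*gpu, intensity) / attainable_rate(*cpu, intensity)

# Hypothetical devices: (peak FP64 FLOP/s, memory bandwidth in B/s)
gpu = (7.0e12, 900e9)   # a compute-class GPU (illustrative)
cpu = (5.0e10, 50e9)    # a single CPU core (illustrative)

# At low arithmetic intensity both devices are bandwidth-bound, so the
# estimated speedup reduces to the ratio of memory bandwidths.
print(estimated_speedup(gpu, cpu, intensity=0.5))
```

In the bandwidth-bound regime shown, the estimate is simply the ratio of memory bandwidths; at high arithmetic intensity it becomes the ratio of FP64 peak rates, which is consistent with the abstract's observation that the measured speedup tracks both quantities linearly.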
