Abstract

The moving particle semi-implicit (MPS) method performs well in simulating incompressible flows with free surfaces. Despite its wide applicability, the MPS method suffers from a fundamental instability problem and a high computational cost in practical applications, and substantial research has been devoted to improving its stability and accuracy. Meanwhile, graphics processing units (GPUs), massively parallel processors originally designed to execute three-dimensional graphics operations at high speed, provide unprecedented capability for scientific computation. However, a single GPU card cannot accommodate engineering applications that require several million particles to resolve the physical processes of interest, because its memory capacity is insufficient. In this work, the dynamic stability (DS) algorithm and the particle shifting (PS) algorithm are used to overcome the instability and inaccuracy caused by tensile instability and non-uniform particle distributions, respectively. Based on this stabilized MPS method, a GPU-based MPS code has been developed in the compute unified device architecture (CUDA) language. An efficient neighborhood particle search is performed using an indirect method, and the matrix of the pressure Poisson equation (PPE) is assembled in parallel. Building on the single-GPU version, a multi-GPU MPS code has been developed. It uses a non-geometric dynamic domain decomposition method that provides homogeneous load balancing, whereby different portions (subdomains) of the physical system under study are assigned to different GPUs, and communication between devices is handled with the message passing interface (MPI). Based on the neighborhood particle search, the techniques for building and updating the “halo” of each subdomain are described in detail. The speed-up of the single-GPU version is analyzed for different numbers of particles, and the scalability of the multi-GPU version is analyzed for different numbers of particles and GPUs. Finally, an application with more than 10^7 particles is presented to demonstrate the capability of the code for large-scale simulations.
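
Note: the abstract outlines but does not detail the GPU algorithms. As a rough illustration of what an indirect (cell-based) neighborhood particle search on the GPU can look like, the following CUDA sketch bins particles into a uniform background grid, sorts them by cell index with Thrust, and records where each cell starts, so that a neighbor query only has to scan the surrounding cells. The struct and function names (Grid, buildCellList, findCellStart, buildNeighborStructure) and the grid layout are assumptions made for illustration, not the authors' implementation.

```cuda
// Hedged sketch of a cell-linked-list ("indirect") neighbor search on the GPU.
#include <cuda_runtime.h>
#include <thrust/device_ptr.h>
#include <thrust/sort.h>

struct Grid {
    float3 origin;   // lower corner of the bounding box
    float  h;        // cell size, typically the kernel support radius r_e
    int3   dim;      // number of cells in x, y, z
};

// Map a particle position to a flattened cell index.
__device__ int cellIndexOf(float3 p, Grid g)
{
    int ix = min(max(int((p.x - g.origin.x) / g.h), 0), g.dim.x - 1);
    int iy = min(max(int((p.y - g.origin.y) / g.h), 0), g.dim.y - 1);
    int iz = min(max(int((p.z - g.origin.z) / g.h), 0), g.dim.z - 1);
    return (iz * g.dim.y + iy) * g.dim.x + ix;
}

// One thread per particle: store its cell index and original particle id.
__global__ void buildCellList(const float3* pos, int* cellIdx, int* partIdx,
                              int n, Grid g)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    cellIdx[i] = cellIndexOf(pos[i], g);
    partIdx[i] = i;
}

// After sorting (cellIdx, partIdx) by cellIdx, record where each cell starts.
__global__ void findCellStart(const int* cellIdx, int* cellStart, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (i == 0 || cellIdx[i] != cellIdx[i - 1])
        cellStart[cellIdx[i]] = i;
}

void buildNeighborStructure(const float3* d_pos, int* d_cellIdx, int* d_partIdx,
                            int* d_cellStart, int n, Grid g)
{
    int block = 256, nBlocks = (n + block - 1) / block;
    buildCellList<<<nBlocks, block>>>(d_pos, d_cellIdx, d_partIdx, n, g);

    // Group particles by cell by sorting the (cell, particle) pairs.
    thrust::device_ptr<int> keys(d_cellIdx), vals(d_partIdx);
    thrust::sort_by_key(keys, keys + n, vals);

    // Mark all cells empty (-1), then record the start offset of each cell.
    cudaMemset(d_cellStart, 0xFF, sizeof(int) * g.dim.x * g.dim.y * g.dim.z);
    findCellStart<<<nBlocks, block>>>(d_cellIdx, d_cellStart, n);
    // A neighbor query for particle i then scans only the 27 cells around it.
}
```

The multi-GPU version additionally has to build and update a “halo” of near-boundary particles so that the neighborhood search on each GPU also sees particles owned by adjacent subdomains. The sketch below assumes, purely for illustration, a one-dimensional decomposition along x and host-staged MPI transfers; the Particle layout and the helpers selectHalo and exchangeHalo are hypothetical, and in a real GPU-resident code the buffers would be packed on the device and either copied to the host or sent directly with CUDA-aware MPI.

```cuda
// Hedged sketch of a per-step halo exchange between neighboring subdomains.
#include <cuda_runtime.h>
#include <mpi.h>
#include <vector>

struct Particle { float x, y, z, u, v, w, p; };   // minimal illustrative layout

// Owned particles within one support radius r_e of the right boundary form
// the halo that the right-hand neighbor needs for its neighbor search.
std::vector<Particle> selectHalo(const std::vector<Particle>& owned,
                                 float xRight, float re)
{
    std::vector<Particle> halo;
    for (const Particle& q : owned)
        if (q.x > xRight - re) halo.push_back(q);
    return halo;
}

// Send our halo to the right neighbor and receive its halo in return.
// The mirrored exchange with the left neighbor follows the same pattern.
std::vector<Particle> exchangeHalo(const std::vector<Particle>& sendBuf,
                                   int rightRank, MPI_Comm comm)
{
    int sendBytes = (int)(sendBuf.size() * sizeof(Particle)), recvBytes = 0;
    MPI_Sendrecv(&sendBytes, 1, MPI_INT, rightRank, 0,
                 &recvBytes, 1, MPI_INT, rightRank, 0, comm, MPI_STATUS_IGNORE);

    std::vector<Particle> recvBuf(recvBytes / sizeof(Particle));
    MPI_Sendrecv(sendBuf.data(), sendBytes, MPI_BYTE, rightRank, 1,
                 recvBuf.data(), recvBytes, MPI_BYTE, rightRank, 1,
                 comm, MPI_STATUS_IGNORE);
    return recvBuf;   // appended to the local arrays as ghost particles
}
```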
