Abstract

We present a scalable dissipative particle dynamics (DPD) simulation code, implemented fully on Graphics Processing Units (GPUs) using a hybrid CUDA/MPI programming model, which achieves a 10–30 times speedup on a single GPU over 16 CPU cores and almost linear weak scaling across a thousand nodes. A unified framework is developed that addresses both efficient neighbor-list generation and the maintenance of particle data locality. Our algorithm generates strictly ordered neighbor lists in parallel; the construction is deterministic and makes no use of atomic operations or sorting. Such a neighbor list yields optimal data-loading efficiency when combined with a two-level particle reordering scheme. A faster in situ Gaussian random number generation scheme based on precomputed binary signatures is proposed. We designed custom transcendental functions that are fast and accurate for evaluating the pairwise interactions. The correctness and accuracy of the code are verified through a set of test cases simulating Poiseuille flow and spontaneous vesicle formation. Benchmarks demonstrate the speedup of our implementation over the CPU implementation as well as strong and weak scalability. A large-scale simulation of spontaneous vesicle formation consisting of 128 million particles was conducted to further illustrate the practicality of our code in real-world applications.

Program summary

Program title: GPU-accelerated DPD Package for LAMMPS
Catalogue identifier: AETN_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETN_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License, version 3
No. of lines in distributed program, including test data, etc.: 1602716
No. of bytes in distributed program, including test data, etc.: 26489166
Distribution format: tar.gz
Programming language: C/C++, CUDA C/C++, MPI
Computer: Any computer having an nVidia GPGPU with compute capability 3.0
Operating system: Linux
Has the code been vectorized or parallelized?: Yes
Number of processors used: 1024 16-core CPUs and 1024 GPUs
RAM: 500 Mbytes host memory, 2 Gbytes device memory
Supplementary material: The data for the examples discussed in the manuscript is available for download.
Classification: 6.5, 12, 16.1, 16.11
Nature of problem: Particle-based simulation of mesoscale systems involving nano/micro-fluids, polymers, and spontaneous self-assembly processes.
Solution method: The system is approximated by a number of coarse-grained particles interacting through pairwise potentials and bonded potentials. Classical mechanics is assumed, following Newton's laws. The evolution of the system is integrated using a time-stepping scheme such as velocity-Verlet.
Restrictions: The code runs only on CUDA GPGPUs with compute capability 3.0.
Unusual features: Fully implemented on GPGPUs with significant speedup.
Running time: 78 h using 1024 GPGPUs for simulating a 128-million-particle system for 18.4 million time steps.
