Abstract

In this paper, we introduce a complete implementation of efficient, large-scale dissipative particle dynamics (DPD) simulation on the Graphics Processing Unit (GPU). The implementation is designed and optimized around the nature of the DPD simulation technique and takes full advantage of the computational power of GPUs. Benchmark studies show that the GPU-based implementation predicts results correctly and provides a speedup of nearly 60 times over LAMMPS running on a single Central Processing Unit (CPU) core. By using a novel divide-and-conquer (D&C) algorithm to reduce the memory requirement of the simulation, our implementation can perform large-scale DPD simulations with tens of millions of particles on a single current GPU. Furthermore, a thermal fluctuation analysis of a very large lamellar system (11,059,200 particles) is presented as a practical application of our implementation, and a scaling law at large wavelengths, inaccessible to small simulation systems, is observed. Our GPU-based DPD implementation is therefore very promising for studying phenomena that take place on mesoscopic length and time scales and are not easily addressed by a conventional CPU-based implementation.
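For readers unfamiliar with the method, DPD evolves particles under three standard pairwise forces (conservative, dissipative, and random; Groot and Warren, 1997). The CUDA sketch below is an illustration of that force field only, not the paper's actual kernel: the kernel name, parameters, brute-force O(N^2) pair loop, and per-thread RNG are all assumptions made for brevity; a production code (including the one described in this paper) would use cell lists and symmetric per-pair random numbers.

```cuda
// Minimal sketch of the standard DPD pairwise forces on the GPU.
// Assumed names/parameters for illustration; not the paper's implementation.
#include <cuda_runtime.h>
#include <curand_kernel.h>

struct Particle { float3 r; float3 v; };   // position, velocity

__global__ void dpd_forces(const Particle* p, float3* f, int n,
                           float a_ij,        // conservative repulsion
                           float gamma_,      // dissipative coefficient
                           float sigma_,      // noise amplitude, sigma^2 = 2*gamma*kB*T
                           float rc,          // cutoff radius
                           float inv_sqrt_dt, // 1/sqrt(timestep)
                           unsigned long long seed)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    curandState rng;
    curand_init(seed, i, 0, &rng);  // per-thread RNG stream

    float3 fi = make_float3(0.f, 0.f, 0.f);
    for (int j = 0; j < n; ++j) {   // brute-force pair loop for clarity
        if (j == i) continue;
        float dx = p[i].r.x - p[j].r.x;
        float dy = p[i].r.y - p[j].r.y;
        float dz = p[i].r.z - p[j].r.z;
        float r2 = dx*dx + dy*dy + dz*dz;
        if (r2 >= rc*rc || r2 < 1e-12f) continue;

        float r  = sqrtf(r2);
        float w  = 1.f - r / rc;    // DPD weight function w(r)
        float ex = dx / r, ey = dy / r, ez = dz / r;
        float ev = ex*(p[i].v.x - p[j].v.x)
                 + ey*(p[i].v.y - p[j].v.y)
                 + ez*(p[i].v.z - p[j].v.z);

        // NOTE: independent noise per ordered pair breaks exact pairwise
        // momentum conservation; real DPD codes share one random number
        // per (i,j) pair. This is a simplification for the sketch.
        float theta = curand_normal(&rng);

        // conservative + dissipative + random contributions,
        // with w^D(r) = w(r)^2 and w^R(r) = w(r)
        float fmag = a_ij * w
                   - gamma_ * w * w * ev
                   + sigma_ * w * theta * inv_sqrt_dt;
        fi.x += fmag * ex; fi.y += fmag * ey; fi.z += fmag * ez;
    }
    f[i] = fi;
}
```

Because every pair interaction is short-ranged (cut off at rc), replacing the O(N^2) loop above with a cell-list neighbor search makes the force computation O(N), which is what makes simulations of tens of millions of particles tractable on a single GPU.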
