Abstract

In this paper, we introduce a complete implementation for efficient, large-scale dissipative particle dynamics (DPD) simulation on Graphics Processing Units (GPUs). The implementation is designed and optimized according to the nature of the DPD simulation technique and takes full advantage of the computational power of GPUs. Benchmark studies show that the GPU-based implementation reproduces the correct results and provides a nearly 60-fold speedup over LAMMPS running on a single Central Processing Unit (CPU) core. By using a novel divide-and-conquer (D&C) algorithm to reduce the memory requirement of the simulation, our implementation is able to perform large-scale DPD simulations with tens of millions of particles on a single contemporary GPU. Furthermore, a thermal fluctuation analysis of a very large lamellar system (11,059,200 particles) is presented as a practical application of our implementation, and a scaling law at large wavelengths, inaccessible to smaller simulation systems, is observed. Our GPU-based DPD implementation is therefore very promising for studying phenomena that take place on mesoscopic length and time scales and are not easily addressed by conventional CPU-based implementations.
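The abstract does not spell out the D&C algorithm itself, so the following is only a minimal sketch of the general idea it names: splitting the simulation box into slabs and processing one slab (plus a halo of width equal to the cutoff r_c) at a time, so that peak device memory scales with the slab population rather than the full particle count. All names and parameters below (slab_pair_forces, NSLABS, the repulsion parameter a_ij = 25, the random particle fill) are illustrative assumptions, not the authors' implementation.

```cuda
// Hypothetical slab-wise divide-and-conquer sketch for a GPU pair
// computation; NOT the paper's code. Only one slab resides on the
// device at a time, bounding peak GPU memory by the slab size.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>
#include <vector>

struct Particle { float x, y, z; };

// Brute-force conservative-force loop within one slab. A production DPD
// kernel would use cell lists and include dissipative and random terms.
__global__ void slab_pair_forces(const Particle* p, float* fz, int n,
                                 float rc) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float acc = 0.0f;
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float dx = p[i].x - p[j].x, dy = p[i].y - p[j].y,
              dz = p[i].z - p[j].z;
        float r2 = dx * dx + dy * dy + dz * dz;
        if (r2 < rc * rc && r2 > 1e-12f) {
            float r = sqrtf(r2);
            acc += 25.0f * (1.0f - r / rc) * dz / r;  // a_ij = 25 (assumed)
        }
    }
    fz[i] = acc;
}

int main() {
    const float L = 20.0f, rc = 1.0f;  // box edge and cutoff (assumed)
    const int N = 100000, NSLABS = 8;  // total particles, slab count
    std::vector<Particle> all(N);
    for (auto& pt : all)               // random fill stands in for real data
        pt = { L * rand() / (float)RAND_MAX,
               L * rand() / (float)RAND_MAX,
               L * rand() / (float)RAND_MAX };

    const float dz = L / NSLABS;
    for (int s = 0; s < NSLABS; ++s) {
        float z0 = s * dz, z1 = z0 + dz;
        // Gather the slab plus a halo of width rc so pairs that cross the
        // slab boundary are still seen (periodic wrap omitted for brevity).
        std::vector<Particle> slab;
        for (const auto& pt : all)
            if (pt.z >= z0 - rc && pt.z < z1 + rc) slab.push_back(pt);

        // Device buffers are sized per slab: this is the memory saving.
        int n = (int)slab.size();
        Particle* d_p; float* d_f;
        cudaMalloc(&d_p, n * sizeof(Particle));
        cudaMalloc(&d_f, n * sizeof(float));
        cudaMemcpy(d_p, slab.data(), n * sizeof(Particle),
                   cudaMemcpyHostToDevice);
        slab_pair_forces<<<(n + 255) / 256, 256>>>(d_p, d_f, n, rc);

        std::vector<float> fz(n);
        cudaMemcpy(fz.data(), d_f, n * sizeof(float),
                   cudaMemcpyDeviceToHost);
        cudaFree(d_p);
        cudaFree(d_f);
        printf("slab %d: %d particles processed\n", s, n);
    }
    return 0;
}
```

The trade-off this sketch makes explicit is that halo particles are duplicated across neighboring slabs and each slab incurs its own host-device transfer; in exchange, device memory never holds more than one slab, which is what enables simulations larger than the GPU's memory would otherwise allow.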
