Abstract

This study reports a Graphics Processing Unit (GPU)-based parallelization of the Distinct Lattice Spring Model (DLSM) for geomechanics simulation. The DLSM is a newly developed numerical model for rock dynamics problems, e.g., dynamic failure and wave propagation. Despite its applicability, one drawback of this model is its high computational cost in practical simulations. To tackle this problem, a GPU with the Compute Unified Device Architecture (CUDA) is adopted to parallelize the DLSM code. The performance of the GPU DLSM code is tested on two computers equipped with modern GPU cards. The results show that significant performance improvements are gained from GPU parallelization of the DLSM code (the maximum speedup achieved was 23×).
