Abstract
Geostatistical simulations have become a widely used tool for modeling oil and gas reservoirs and assessing uncertainty. A key current challenge is building high-resolution models in reasonable computational time. One possible solution is to exploit parallel computing strategies. In this paper we present a new methodology that combines graphics processing units (GPUs) with a master–slave architecture for geostatistical simulations based on random paths. It is a hybrid method in which several levels of master and slave processors distribute the computational grid points and maximize the utilization of the GPU's many processors. The method avoids conflicts between concurrently simulated grid points, an important issue for efficient high-resolution simulation. For the sake of comparison, two distinct parallelization methods are implemented, one of which is specific to pattern-based simulations. To illustrate the efficiency of the method, a pattern-based simulation algorithm is adapted to the GPU. Performance tests are carried out on three large grids, and the results are compared with those obtained from central processing unit (CPU) simulations. The comparison indicates that the use of GPUs reduces the computation time by a factor of 26–85.
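To make the conflict-avoidance idea concrete, the sketch below shows one way a host-side "master" could hand a conflict-free batch of random-path grid nodes to a GPU kernel, with each thread simulating one node. This is an illustrative sketch only, not the authors' implementation: the kernel name, batch size, and the placeholder "simulated value" are assumptions, and a real implementation would gather conditioning data from each node's search neighborhood before drawing a value.

```cuda
// conflict_free_batch.cu -- illustrative sketch, not the paper's code.
// Assumes the master has already split the random path into batches of
// nodes whose search neighborhoods do not overlap, so every node in a
// batch can be simulated concurrently without read/write conflicts.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void simulateBatch(const int *nodeIdx, float *grid, int batchSize)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= batchSize) return;
    int node = nodeIdx[t];
    // Placeholder step: a real kriging/pattern simulation would use the
    // node's conditioning neighborhood here to draw the simulated value.
    grid[node] = 0.5f;  // hypothetical simulated value
}

int main()
{
    const int nGrid     = 1 << 20;  // assumed grid size (1M nodes)
    const int batchSize = 4096;     // assumed nodes per conflict-free batch

    float *dGrid = nullptr;
    int   *dNodes = nullptr;
    cudaMalloc((void **)&dGrid,  nGrid * sizeof(float));
    cudaMalloc((void **)&dNodes, batchSize * sizeof(int));

    // The master would fill this array with one conflict-free batch taken
    // from the random path; the first batchSize indices are used for brevity.
    int *hNodes = new int[batchSize];
    for (int i = 0; i < batchSize; ++i) hNodes[i] = i;
    cudaMemcpy(dNodes, hNodes, batchSize * sizeof(int), cudaMemcpyHostToDevice);

    simulateBatch<<<(batchSize + 255) / 256, 256>>>(dNodes, dGrid, batchSize);
    cudaDeviceSynchronize();

    printf("simulated one conflict-free batch of %d nodes\n", batchSize);
    delete[] hNodes;
    cudaFree(dGrid);
    cudaFree(dNodes);
    return 0;
}
```

In this scheme the master loops over batches, launching one kernel per batch, so concurrency is limited only by how many well-separated nodes the path partitioning can place in each batch.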