Abstract

High-order spatial statistics are widely used to describe spatial phenomena in the geosciences, but their computation imposes an extremely heavy burden for large geostatistical models. To improve computational efficiency, a GPU (Graphics Processing Unit) based parallel approach is proposed for the calculation of high-order spatial statistics. The parallel scheme uses a two-stage method: the replicates of a moment for a given template are calculated simultaneously (node-stage parallelism), and the spatial moments are transformed into cumulants for all lags of a template simultaneously (template-stage parallelism). A series of optimization strategies is also proposed to take full advantage of the computational capabilities of GPUs, including appropriate task allocation to CUDA (Compute Unified Device Architecture) threads, proper organization of GPU physical memory, and improvement of the existing parallel routines. Tests are carried out on two training images to compare the performance of the GPU-based method with that of the serial implementation. Error analysis indicates that the proposed parallel method generates accurate cumulant maps, and performance comparisons on various examples show speedups of over 17 times for third-, fourth- and fifth-order cumulant calculations.
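
The following is a minimal sketch of the node-stage parallelism described above, assuming a 2D training image stored as a flat array and a template given as a list of lag offsets; the kernel name, parameters, and the simple atomicAdd reduction are illustrative assumptions, not the authors' implementation. Each CUDA thread handles one node, forms the product of grid values at the template offsets (one moment replicate), and accumulates it into a global sum.

```cuda
#include <cuda_runtime.h>

// Hypothetical node-stage kernel: one thread per training-image node.
// Each thread computes one replicate of an n-th order spatial moment for
// the given template and accumulates it into *momentSum.
__global__ void momentReplicates(const float *image, int nx, int ny,
                                 const int2 *lags, int nLags,
                                 float *momentSum, unsigned int *replicateCount)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= nx * ny) return;

    int ix = idx % nx;
    int iy = idx / nx;

    // Product of the value at the node and the values at each lag offset.
    float prod = image[idx];
    for (int k = 0; k < nLags; ++k) {
        int jx = ix + lags[k].x;
        int jy = iy + lags[k].y;
        // Skip nodes whose template falls outside the grid.
        if (jx < 0 || jx >= nx || jy < 0 || jy >= ny) return;
        prod *= image[jy * nx + jx];
    }

    // Accumulate this node's replicate and count it; a block-level
    // reduction would reduce atomic contention in a tuned version.
    atomicAdd(momentSum, prod);
    atomicAdd(replicateCount, 1u);
}
```

On the host, dividing the accumulated sum by the replicate count gives the experimental moment for that template; the template-stage step, which combines the moments into cumulants for all lags, could then be launched as a second kernel with one thread per lag configuration.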
