Abstract

In this study, we present a practical implementation of prestack Kirchhoff time migration (PSTM) on a general-purpose graphics processing unit (GPU). First, we consider the three main optimizations of the PSTM GPU code: designing a reasonable execution configuration, using texture memory for velocity interpolation, and applying intrinsic functions in device code. This approach achieves a speedup of nearly 45 times over the CPU code on an NVIDIA GTX 680 GPU when a larger imaging space is used, where the PSTM output is a common reflection point (CRP) gather stored as a matrix I[nx][ny][nh][nt]. However, this method requires more memory, so a limited imaging space cannot fully exploit the GPU's resources. To overcome this problem, we designed a multi-GPU PSTM scheme that images different subsets of the seismic data, selected by offset value, on different GPUs. This scheme reaches the peak speedup of the GPU PSTM code and greatly increases computational efficiency without changing the imaging result.

Highlights

  • Seismic exploration is an important area of geophysical research, which aims at determining subsurface structures to detect where oil and gas can be found and recovered

  • Prestack Kirchhoff time migration (PSTM) is one of the most popular migration techniques used for seismic data processing because of its simplicity, efficiency, feasibility, and target-orientated properties (Bevc 1997)

  • Shi et al. (2011) proposed a method for accelerating PSTM on graphics processing units (GPUs) by splitting the PSTM procedure into four consecutive kernels according to the GPU memory limitations, as well as considering the floating-point error problem, which may lead to differences between GPU and CPU PSTM results


Summary

INTRODUCTION

Seismic exploration is an important area of geophysical research, which aims at determining subsurface structures to detect where oil and gas can be found and recovered. NVIDIA's compute unified device architecture (CUDA) provides a C-like programming model for exploiting the massively parallel processing power of NVIDIA GPUs (NVIDIA 2013), and it is employed widely in many parallel computing applications (Lu et al. 2013, Capuzzo-Dolcetta and Spera 2013, Westphal et al. 2014). Several studies have used NVIDIA GPUs to accelerate PSTM. Liu et al. (2009) discussed the possibility of parallel computation with NVIDIA GPUs. Shi et al. (2011) proposed a method for accelerating PSTM on GPUs by splitting the PSTM procedure into four consecutive kernels according to the GPU memory limitations, as well as considering the floating-point error problem, which may lead to differences between GPU and CPU PSTM results. Our test results demonstrate that the proposed method achieves a substantial speedup.

REVIEW OF PSTM
Hardware and real seismic field data
Profiling the PSTM CPU code
MULTI-GPU SCHEME FOR PSTM
Findings
CONCLUSIONS
