Abstract

Least-squares reverse time migration (LSRTM) is a seismic imaging method that can provide higher-resolution images of subsurface structures than other migration methods. However, LSRTM is computationally expensive. The objective of this work is therefore to develop a GPU implementation of LSRTM that reduces its computational time. In this work, the two-dimensional first-order acoustic wave equations are solved using a second-order finite-difference method on a staggered grid, and a perfectly matched layer is used as the absorbing boundary condition. The adjoint-state method is used to compute the gradient of the objective function, and a linear conjugate gradient method is used to minimize it. Both forward and backward propagation of the wavefields with the finite-difference method are performed on a single GPU using the NVIDIA CUDA library. For verification, the GPU implementation of LSRTM was applied to a synthetic data set generated from the Marmousi model. Numerical results show that LSRTM provides an image of subsurface structure with higher resolution than a conventional RTM image. In terms of computational cost, the GPU version of LSRTM is significantly faster than the serial CPU version.
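To illustrate the forward-modeling step described above, the following CUDA sketch advances the 2-D first-order acoustic wave equations (velocity-pressure form) by one time step on a staggered grid with second-order spatial differences. It is not the authors' implementation: the array names (p, vx, vz, kappa, rho_inv), memory layout, and launch configuration are illustrative assumptions, and the PML terms, source injection, and adjoint (backward) pass are omitted.

// Minimal sketch of one staggered-grid time step, second order in space.
// Assumed layout: row-major arrays of size nz*nx; vx staggered at (ix+1/2, iz),
// vz at (ix, iz+1/2). PML and source terms are intentionally left out.

#include <cuda_runtime.h>

__global__ void update_velocity(float *vx, float *vz, const float *p,
                                const float *rho_inv, int nx, int nz,
                                float dt, float dx, float dz)
{
    int ix = blockIdx.x * blockDim.x + threadIdx.x;
    int iz = blockIdx.y * blockDim.y + threadIdx.y;
    if (ix >= nx - 1 || iz >= nz - 1) return;           // keep the stencil inside the grid
    int idx = iz * nx + ix;
    // d(vx)/dt = -(1/rho) dp/dx ; d(vz)/dt = -(1/rho) dp/dz
    vx[idx] -= dt / dx * rho_inv[idx] * (p[idx + 1]  - p[idx]);
    vz[idx] -= dt / dz * rho_inv[idx] * (p[idx + nx] - p[idx]);
}

__global__ void update_pressure(float *p, const float *vx, const float *vz,
                                const float *kappa, int nx, int nz,
                                float dt, float dx, float dz)
{
    int ix = blockIdx.x * blockDim.x + threadIdx.x;
    int iz = blockIdx.y * blockDim.y + threadIdx.y;
    if (ix < 1 || iz < 1 || ix >= nx || iz >= nz) return;
    int idx = iz * nx + ix;
    // dp/dt = -kappa (dvx/dx + dvz/dz), with backward differences on the staggered grid
    float dvx = (vx[idx] - vx[idx - 1])  / dx;
    float dvz = (vz[idx] - vz[idx - nx]) / dz;
    p[idx] -= dt * kappa[idx] * (dvx + dvz);
}

// Host-side wrapper for a single forward-propagation step (device pointers assumed).
void forward_step(float *d_p, float *d_vx, float *d_vz,
                  const float *d_kappa, const float *d_rho_inv,
                  int nx, int nz, float dt, float dx, float dz)
{
    dim3 block(16, 16);
    dim3 grid((nx + block.x - 1) / block.x, (nz + block.y - 1) / block.y);
    update_velocity<<<grid, block>>>(d_vx, d_vz, d_p, d_rho_inv, nx, nz, dt, dx, dz);
    update_pressure<<<grid, block>>>(d_p, d_vx, d_vz, d_kappa, nx, nz, dt, dx, dz);
}

In an LSRTM workflow such as the one summarized above, a host-side loop would call a step like this for every time sample of each shot, once for forward propagation and once for the adjoint wavefield, with the gradient accumulated on the GPU between the two passes.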
