Abstract

Migration techniques are an integral part of seismic imaging workflows. Least-squares reverse time migration (LSRTM) overcomes some of the shortcomings of conventional migration algorithms by compensating for illumination and removing sampling artifacts to increase spatial resolution. However, the computational cost associated with iterative LSRTM is high and convergence can be slow in complex media. We implement prestack LSRTM in a deep-learning framework and adopt strategies from the data science domain to accelerate convergence. Our hybrid framework leverages existing physics-based models and machine-learning optimizers to obtain higher-quality solutions at lower cost. Using a time-domain formulation, we find that minibatch gradients can reduce the computational cost by using a subset of the total shots at each iteration. The minibatch approach not only reduces source crosstalk, but it is also less memory-intensive. Combining minibatch gradients with deep-learning optimizers and loss functions can improve the efficiency of LSRTM. Deep-learning optimizers such as adaptive moment estimation are generally well-suited for noisy and sparse data. We compare different optimizers and determine their efficacy in mitigating migration artifacts. To further accelerate the inversion, we adopt the regularized Huber loss function in conjunction with these optimizers. We apply these techniques to the 2D Marmousi and 3D SEG/EAGE salt models and find improvements over conventional LSRTM baselines. Our approach achieves higher spatial resolution in less computation time, as measured by various qualitative and quantitative evaluation metrics.
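To illustrate how the pieces described above fit together, the following is a minimal sketch (not the authors' code) of minibatch LSRTM driven by a deep-learning optimizer and a Huber data misfit, written in PyTorch. The Born modeling operator `born_forward`, the observed data `d_obs`, and all parameter values are hypothetical placeholders assumed for illustration; any regularization term would be added to the misfit in the same way.

```python
import torch

def lsrtm_minibatch(born_forward, d_obs, n_shots, image_shape,
                    batch_size=8, n_iters=100, lr=1e-3, huber_delta=1.0):
    """Invert for a reflectivity image by minimizing a Huber data misfit
    over randomly drawn minibatches of shots (sketch, assumes a
    differentiable Born modeling operator)."""
    # The reflectivity image is the trainable "parameter" of the framework.
    m = torch.zeros(image_shape, requires_grad=True)
    opt = torch.optim.Adam([m], lr=lr)            # adaptive moment estimation
    loss_fn = torch.nn.HuberLoss(delta=huber_delta)

    for it in range(n_iters):
        # Draw a random subset of shots: cheaper per iteration and less
        # memory-intensive than using all shots at once.
        shots = torch.randperm(n_shots)[:batch_size]
        opt.zero_grad()
        # Linearized (Born) modeling of the selected shots; the gradient of
        # the Huber misfit is back-propagated through the modeling operator.
        d_pred = born_forward(m, shots)
        loss = loss_fn(d_pred, d_obs[shots])
        # A regularization penalty on m (e.g., an L1 term) could be added here.
        loss.backward()
        opt.step()
    return m.detach()
```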
