Abstract

Full waveform inversion (FWI) of 3-D data sets has recently become possible thanks to the development of high performance computing. However, FWI remains a computationally intensive task when high frequencies are injected in the inversion or when more complex wave physics (viscoelasticity) is accounted for. The highest computational cost results from the numerical solution of the wave equation for each seismic source. To reduce the computational burden, one well-known technique is to employ a random linear combination of the sources, rather than using each source independently. This technique, known as source encoding, has been shown to successfully reduce the computational cost when applied to real data. Up to now, the inversion has normally been carried out using gradient descent algorithms. With the idea of achieving a fast and robust frequency-domain FWI, we assess the performance of the random source encoding method when it is interfaced with second-order optimization methods (quasi-Newton l-BFGS, truncated Newton). Because of the additional seismic modelings required to compute the Newton descent direction, it is not clear beforehand whether truncated Newton methods can indeed further reduce the computational cost compared to gradient algorithms. We design precise stopping criteria of iterations to fairly assess the computational cost and the speed-up provided by the source encoding method for each optimization method. We perform experiments on synthetic and real data sets. In both cases, we confirm that combining source encoding with second-order optimization methods reduces the computational cost compared to the case where source encoding is interfaced with gradient descent algorithms. For the synthetic data set, inspired by the geology of the Gulf of Mexico, we show that the quasi-Newton l-BFGS algorithm requires the lowest computational cost. For the real data set application on the Valhall data, we show that the truncated Newton methods provide the most robust descent direction.
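The random source encoding idea described above can be illustrated with a minimal sketch. Here we assume Rademacher (random ±1) encoding weights and represent each source term as a NumPy array; the paper only states that a random linear combination is used, so the weight distribution and the function names below are illustrative assumptions, not the authors' implementation. The key property is that a single "supershot" simulation replaces one simulation per source, and the encoded gradient matches the full gradient in expectation over the random weights.

```python
import numpy as np

def encode_sources(source_terms, rng):
    """Combine ns individual source terms into one encoded 'supershot'.

    source_terms : array of shape (ns, ...) -- one source term per shot
                   (e.g. right-hand sides of the frequency-domain wave equation).
    rng          : numpy.random.Generator supplying the random encoding weights.

    Returns the Rademacher (+/-1) weights and the weighted sum of the sources.
    Solving the wave equation once for this supershot costs one simulation
    instead of ns, at the price of cross-talk noise that averages out over
    re-drawn encodings.
    """
    ns = source_terms.shape[0]
    weights = rng.choice([-1.0, 1.0], size=ns)
    supershot = np.tensordot(weights, source_terms, axes=1)
    return weights, supershot

# Tiny demonstration: 4 shots, each a vector source term of length 3.
rng = np.random.default_rng(0)
sources = np.arange(12.0).reshape(4, 3)
w, supershot = encode_sources(sources, rng)
assert np.allclose(supershot, w @ sources)  # encoding is the weighted sum
```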

Highlights

  • Full waveform inversion (FWI) is a non-linear ill-posed inverse problem which aims to reconstruct the Earth’s parameters such as, for instance, P- and S-wave velocities, density, attenuation or anisotropic parameters, by fitting seismic data recorded near the surface or at the sea bottom (Lailly 1983; Tarantola 1984; Virieux & Operto 2009)

  • We design precise stopping criteria of iterations to fairly assess the computational cost and the speed-up provided by the source encoding method for each optimization method

  • We have shown that the truncated Newton methods have the highest convergence rate, while limited-memory BFGS (l-BFGS) has the lowest computational cost, followed closely by Gauss–Newton (GN), whether source encoding is used or not


Summary

INTRODUCTION

Full waveform inversion (FWI) is a non-linear ill-posed inverse problem which aims to reconstruct the Earth’s parameters such as, for instance, P- and S-wave velocities, density, attenuation or anisotropic parameters, by fitting seismic data recorded near the surface or at the sea bottom (Lailly 1983; Tarantola 1984; Virieux & Operto 2009). In standard FWI, limited-memory BFGS (l-BFGS) has been shown to improve the convergence (Brossier et al. 2009). This quasi-Newton method approximates the inverse of the Hessian by performing successive rank-2 updates of an initial estimation from the gradients and the models of the previous l iterations (Byrd et al. 1995). We need to assess whether the higher computational cost of one non-linear iteration of the truncated Newton methods can be balanced by an improved convergence rate provided by a more accurate estimation of the Hessian. We compare the convergence and the computational efficiency of the above-mentioned optimization methods [non-linear conjugate gradient (nl-CG), l-BFGS, Gauss–Newton (GN) and full Newton (FN)] when they are implemented in efficient frequency-domain FWI with and without random source encoding. In Appendix B, we illustrate that when starting from an inaccurate model, source encoding can help to guide the inversion toward an improved minimum of the misfit function, thanks to a broader exploration of the model space.
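The l-BFGS inverse-Hessian application mentioned above is usually computed with the standard two-loop recursion, which only needs the stored model steps s and gradient differences y from the last l iterations. The sketch below is a generic textbook implementation in NumPy, not the authors' code; the scaling of the initial inverse-Hessian estimate by s·y / y·y is one common choice among several.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: apply the l-BFGS inverse-Hessian estimate to grad.

    s_list, y_list : lists (most recent pair first) of model-step differences
                     s_k = m_{k+1} - m_k and gradient differences
                     y_k = g_{k+1} - g_k from the previous l iterations.
    Returns the descent direction -H_inv @ grad without ever forming H_inv,
    using only O(l * n) storage -- the memory saving that makes the method
    attractive for large FWI model spaces.
    """
    q = grad.copy()
    rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in zip(s_list, y_list, rhos):       # first loop: newest first
        a = rho * s.dot(q)
        alphas.append(a)
        q -= a * y
    # Initial inverse-Hessian scaling from the most recent curvature pair.
    s0, y0 = s_list[0], y_list[0]
    q *= s0.dot(y0) / y0.dot(y0)
    for s, y, rho, a in reversed(list(zip(s_list, y_list, rhos, alphas))):
        b = rho * y.dot(q)                            # second loop: oldest first
        q += (a - b) * s
    return -q

# Sanity check on a quadratic with identity Hessian (y_k == s_k): the
# recursion then reproduces the steepest-descent direction exactly.
g = np.array([1.0, 2.0])
d = lbfgs_direction(g, [np.array([0.5, 0.5])], [np.array([0.5, 0.5])])
assert np.allclose(d, -g)
```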

METHOD
Second-order optimization methods
Pre-conditioner
Tikhonov regularization
Random source encoding
Optimization algorithms with source encoding
NUMERICAL EXAMPLES
Experimental protocol
Synthetic example
Synthetic data without noise
Real data example
CONCLUSION
Convergence rate
Computational efficiency and speed-up

