Abstract

Markov chain Monte Carlo (MCMC) sampling of solutions to large-scale inverse problems is regarded by many as infeasible because of the large number of model parameters. This holds, however, only if arbitrary, local proposal distributions are used. If we instead use a global proposal informed by the physics of the problem, we can dramatically improve the performance of MCMC and even solve highly nonlinear inverse problems with vast model spaces. We illustrate this with a seismic full-waveform inverse problem in the acoustic approximation, involving close to 10^6 parameters. The improved performance is seen mainly as a dramatic shortening of the burn-in time (the time needed to reach at least local equilibrium), but the algorithm's ability to explore high-probability regions (through more accepted perturbations) is also potentially better. The sampling distribution of the algorithm converges asymptotically to the posterior probability distribution, but, as with all other methods for highly nonlinear inverse problems, there is no guarantee that all high-probability solutions have been visited in a finite number of iterations. On the other hand, the proposed method makes it possible to sample more high-probability solutions in a shorter time without sacrificing asymptotic convergence. This may be a practical advantage for problems with many parameters and computer-intensive forward calculations.
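The abstract contrasts local (random-walk) proposals with a global, physics-informed proposal inside an MCMC sampler. As a point of reference only, the sketch below shows a generic Metropolis-Hastings loop in Python in which any proposal, local or global, can be plugged in; the names `log_posterior`, `propose`, and `log_q` are hypothetical placeholders, and the sketch is not the paper's full-waveform implementation. Its purpose is to show where the proposal-density correction enters the acceptance rule, which is what preserves asymptotic convergence to the posterior regardless of how the proposal is constructed.

```python
import numpy as np

def metropolis_hastings(log_posterior, propose, log_q, m0, n_iter, rng=None):
    """Generic Metropolis-Hastings sampler with an arbitrary proposal.

    log_posterior(m)      -> unnormalized log posterior density of model m
    propose(m, rng)       -> candidate model drawn from the proposal q(. | m)
    log_q(m_to, m_from)   -> log proposal density q(m_to | m_from)

    All three callables are assumptions for this sketch; a physics-informed,
    global proposal would be supplied through `propose` and `log_q`.
    """
    rng = np.random.default_rng() if rng is None else rng
    m = np.asarray(m0, dtype=float)
    logp = log_posterior(m)
    samples, n_accept = [], 0
    for _ in range(n_iter):
        m_new = propose(m, rng)
        logp_new = log_posterior(m_new)
        # Metropolis-Hastings log acceptance ratio: the posterior ratio times
        # the proposal-density correction q(m | m_new) / q(m_new | m). The
        # correction is what keeps the chain targeting the posterior even
        # when the proposal is global rather than a symmetric local step.
        log_alpha = (logp_new - logp) + (log_q(m, m_new) - log_q(m_new, m))
        if np.log(rng.random()) < log_alpha:
            m, logp = m_new, logp_new
            n_accept += 1
        samples.append(m.copy())
    return np.array(samples), n_accept / n_iter
```

In this framing, the performance gain described in the abstract comes from the quality of `propose`: a proposal that already concentrates candidates near high-probability models yields more accepted perturbations and a shorter burn-in, while the acceptance rule above leaves the stationary distribution unchanged.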
