Abstract
We compare two Monte Carlo inversions that aim to solve some of the main problems of dispersion curve inversion: deriving reliable uncertainty appraisals, determining the optimal model parameterization and avoiding entrapment in local minima of the misfit function. The first method is a transdimensional Markov chain Monte Carlo that treats the number of model parameters as unknown, that is, the locations of layer boundaries together with the Vs and the Vp/Vs ratio of each layer. A reversible-jump Markov chain Monte Carlo algorithm is used to sample the variable-dimension model space, while the adoption of a parallel tempering strategy and of a delayed rejection updating scheme improves the efficiency of the probabilistic sampling. The second approach is a Hamiltonian Monte Carlo inversion that considers the Vs, the Vp/Vs ratio and the thickness of each layer as unknowns, whereas the best model parameterization (number of layers) is determined by applying standard statistical tools to the outcomes of different inversions run with different model dimensionalities. This work has a mainly didactic perspective and, for this reason, we focus on synthetic examples in which only the fundamental mode is inverted. We perform what we call semi-analytical and seismic inversion tests on 1D subsurface models. In the first case, the dispersion curves are computed directly from the considered model using the Haskell–Thomson method, while in the second case they are extracted from synthetic shot gathers. To validate the inversion outcomes, we analyse the estimated posterior models and also perform a sensitivity analysis in which we compute the model resolution matrices, posterior covariance matrices and correlation matrices from the ensembles of sampled models. Our tests demonstrate that the major benefit of the transdimensional inversion is its capability to provide a parsimonious solution that automatically adjusts the model dimensionality.
The downside of this approach is that many models must be sampled to guarantee accurate posterior uncertainty. By contrast, the Hamiltonian Monte Carlo algorithm requires fewer sampled models, but its limitations are the computational cost of the Jacobian computation and the multiple inversion runs needed to determine the optimal model parameterization.
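To make the Hamiltonian Monte Carlo idea concrete, the following is a minimal sketch of a single HMC update, using a toy 2-D Gaussian negative log-posterior in place of a real dispersion-curve misfit. All names here (`neg_log_post`, `grad`, `hmc_step`, the step size and trajectory length) are illustrative assumptions, not the authors' implementation; the gradient function plays the role that the Jacobian computation plays in the actual inversion.

```python
import numpy as np

def neg_log_post(m):
    # Toy negative log-posterior: standard 2-D Gaussian (stand-in for a
    # dispersion-curve misfit function).
    return 0.5 * np.dot(m, m)

def grad(m):
    # Gradient of the toy negative log-posterior; in the real inversion
    # this is where the costly Jacobian computation enters.
    return m

def hmc_step(m, step=0.1, n_leap=20, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    p = rng.standard_normal(m.shape)          # auxiliary momentum
    m_new, p_new = m.copy(), p.copy()
    # Leapfrog integration of Hamilton's equations.
    p_new -= 0.5 * step * grad(m_new)
    for _ in range(n_leap - 1):
        m_new += step * p_new
        p_new -= step * grad(m_new)
    m_new += step * p_new
    p_new -= 0.5 * step * grad(m_new)
    # Metropolis accept/reject on the total energy (Hamiltonian).
    h_old = neg_log_post(m) + 0.5 * np.dot(p, p)
    h_new = neg_log_post(m_new) + 0.5 * np.dot(p_new, p_new)
    if rng.random() < np.exp(h_old - h_new):
        return m_new, True
    return m, False

rng = np.random.default_rng(0)
m = np.array([3.0, -2.0])
accepted = 0
for _ in range(500):
    m, ok = hmc_step(m, rng=rng)
    accepted += ok
rate = accepted / 500
print(rate)
```

Because the leapfrog integrator nearly conserves the Hamiltonian, acceptance rates stay high even for distant proposals, which is why HMC typically needs fewer sampled models than random-walk schemes, at the price of one gradient (Jacobian) evaluation per leapfrog step.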