Abstract
We propose a novel method for sampling and optimization tasks based on a stochastic interacting particle system. We explain how this method can be used for the following two goals: (i) generating approximate samples from a given target distribution and (ii) optimizing a given objective function. The approach is derivative-free and affine invariant, and is therefore well-suited for solving inverse problems defined by complex forward models: (i) allows generation of samples from the Bayesian posterior and (ii) allows determination of the maximum a posteriori estimator. We investigate the properties of the proposed family of methods in terms of various parameter choices, both analytically and by means of numerical simulations. The analysis and numerical simulations establish that the method has potential for general-purpose optimization tasks over Euclidean space; contraction properties of the algorithm are established under suitable conditions, and computational experiments demonstrate wide basins of attraction for various specific problems. The analysis and experiments also demonstrate the potential of the sampling methodology in regimes in which the target distribution is unimodal and close to Gaussian; indeed, we prove that the method recovers a Laplace approximation to the measure in certain parametric regimes and provide numerical evidence that this Laplace approximation attracts a large set of initial conditions in a number of examples.
Highlights
1.1 Background
We consider the inverse problem of finding θ from y where y = G(θ) + η. (1.1)
Here y ∈ RK is the observation, θ ∈ Rd is the unknown parameter, G : Rd → RK is the forward model and η is the observational noise.
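To make the setup in (1.1) concrete, the following is a minimal numerical instance with a linear forward model; the names `A`, `theta_true`, and `eta` are illustrative choices, not notation from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d, K = 2, 5                            # parameter and observation dimensions
A = rng.standard_normal((K, d))        # a linear forward model: G(theta) = A @ theta
theta_true = np.array([0.5, -1.0])     # the unknown parameter to be recovered
eta = 0.1 * rng.standard_normal(K)     # observational noise
y = A @ theta_true + eta               # the data, as in (1.1)
```

In the Bayesian approach, an objective f(θ) built from the data misfit (plus a prior term) then defines the posterior, and its minimizer is the MAP point discussed below.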
These iterative ensemble Kalman methods are similar to sequential Monte Carlo (SMC) in that they seek to map the prior to the posterior in finite continuous time or in a finite number of steps.
Letting α = exp(−∆t) and viewing θn as a discrete-time approximation of a continuous-time process θ(t) at time t = n∆t, we find that the ∆t → 0 continuous-time limit associated with these dynamics is a McKean–Vlasov SDE driven by a weighted ensemble mean and covariance.
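A sketch of one step of the discrete-time consensus dynamics of this general form follows, assuming Gibbs weights proportional to exp(−βf) and a weighted ensemble mean and covariance, as in consensus-based methods; the function and parameter names (`cbs_step`, `beta`, `lam`, `alpha`) are ours, not the paper's.

```python
import numpy as np

def cbs_step(theta, f, beta=10.0, lam=1.0, alpha=0.9, rng=None):
    """One consensus-based update of the ensemble (illustrative sketch).

    theta : (J, d) array of particles
    f     : objective, e.g. a negative log-posterior
    beta  : inverse temperature for the Gibbs weights
    lam   : noise scaling (1.0 drives the ensemble toward collapse / optimization)
    alpha : exp(-dt), the discretization parameter from the text
    """
    rng = np.random.default_rng() if rng is None else rng
    J, d = theta.shape
    fv = np.array([f(t) for t in theta])
    w = np.exp(-beta * (fv - fv.min()))          # Gibbs weights, shifted for stability
    w /= w.sum()
    m = w @ theta                                # weighted ensemble mean
    dev = theta - m
    C = (w[:, None] * dev).T @ dev               # weighted ensemble covariance
    vals, vecs = np.linalg.eigh(C)               # C is symmetric positive semidefinite
    sqrtC = vecs @ (np.sqrt(np.clip(vals, 0.0, None))[:, None] * vecs.T)
    noise = rng.standard_normal((J, d)) @ sqrtC
    return m + alpha * dev + np.sqrt((1.0 - alpha**2) / lam) * noise
```

Iterating this map contracts the ensemble toward a consensus point near the minimizer of f; the noise term, scaled by the ensemble covariance, is what makes the scheme affine invariant and derivative-free.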
Summary
Solving inverse problems in the Bayesian framework can be prohibitively expensive because of the need to characterize an entire probability distribution. One approach to this is to seek the point of maximum posterior probability, the MAP point [38, 17], defined by θ∗ = argminθ f(θ). By the Bernstein–von Mises theorem (and its extensions) [64], the posterior is expected to be well approximated by a Gaussian density in the large data limit, if the parameter is identifiable in the infinite data setting; a Gaussian approximation is also expected to be good if the forward map is close to linear. For these reasons, use of the Laplace method [60] to obtain a Gaussian approximation of the posterior density is often viewed as a useful approach in many application domains. The focus of this paper is on developing consensus-based sampling of the posterior distribution for Bayesian inverse problems and, in particular, on the study of such methods in the context of Gaussian approximation of the posterior.
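The two objects just described, the MAP point θ∗ = argminθ f(θ) and the Laplace (Gaussian) approximation N(θ∗, H⁻¹) with H the Hessian of f at θ∗, can be sketched generically as follows; this is a minimal finite-difference illustration of the Laplace method, not the paper's consensus-based algorithm, and all names are ours.

```python
import numpy as np

def laplace_approximation(f, theta0, lr=0.1, steps=500, h=1e-5):
    """Find the MAP point of f by gradient descent (finite-difference
    gradients), then return the Laplace approximation N(theta*, H^{-1})."""
    d = theta0.size

    def grad(t):
        g = np.zeros(d)
        for i in range(d):
            e = np.zeros(d); e[i] = h
            g[i] = (f(t + e) - f(t - e)) / (2.0 * h)   # central difference
        return g

    t = theta0.astype(float)
    for _ in range(steps):
        t -= lr * grad(t)                              # descend to the MAP point

    H = np.zeros((d, d))                               # Hessian at the MAP point
    for i in range(d):
        e = np.zeros(d); e[i] = h
        H[:, i] = (grad(t + e) - grad(t - e)) / (2.0 * h)
    H = 0.5 * (H + H.T)                                # symmetrize
    return t, np.linalg.inv(H)                         # mean and covariance
```

For a quadratic f (i.e. a linear forward map with Gaussian noise and prior) this recovers the posterior exactly, which is the regime in which the paper proves the consensus-based method reproduces the Laplace approximation.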