Abstract
Importance sampling is used to approximate Bayes’ rule in many computational approaches to Bayesian inverse problems, data assimilation and machine learning. This paper reviews and further investigates the required sample size for importance sampling in terms of the χ²-divergence between target and proposal. We illustrate through examples the roles that dimension, noise-level and other model parameters play in approximating the Bayesian update with importance sampling. Our examples also facilitate a new direct comparison of standard and optimal proposals for particle filtering.
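As a concrete illustration of approximating the Bayesian update with importance sampling, here is a minimal self-normalized importance sampling sketch in Python. The one-dimensional conjugate Gaussian model (prior N(0, 1), noise level sigma, observation y) and the bounded test function are hypothetical choices made for this example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: prior u ~ N(0, 1), observation y = u + N(0, sigma^2) noise.
# The prior is used as the importance-sampling proposal for the posterior.
sigma, y, N = 0.5, 1.0, 10_000

u = rng.standard_normal(N)               # draws from the proposal (prior)
log_w = -0.5 * ((y - u) / sigma) ** 2    # unnormalized log-likelihood weights
w = np.exp(log_w - log_w.max())          # subtract max for numerical stability
w /= w.sum()                             # self-normalized weights

# Importance-sampling estimate of the posterior expectation of a bounded phi.
phi = np.tanh
estimate = np.sum(w * phi(u))

# Exact posterior N(post_mean, post_var) for this conjugate model; compare
# E[phi(u) | y], computed by quadrature, against the estimate above.
post_var = 1.0 / (1.0 + 1.0 / sigma**2)
post_mean = post_var * y / sigma**2
x = np.linspace(-10.0, 10.0, 20001)
dens = np.exp(-0.5 * (x - post_mean) ** 2 / post_var)
exact = np.sum(phi(x) * dens) / np.sum(dens)
print(f"IS estimate: {estimate:.4f}   exact: {exact:.4f}")
```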
Highlights
Throughout this paper we view both target and proposal as given, and we focus on investigating the required sample size for accurate importance sampling with bounded test functions, following a perspective similar to that of [1,7,8]; a numerical check of the resulting error bound is sketched after these highlights
The main goal of this paper is to provide a rich and unified understanding of the use of importance sampling to approximate the Bayesian update, while keeping the presentation accessible to a large audience
In this subsection we study importance sampling in high-dimensional limits (illustrated by the second sketch below)
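On the first highlight: for bounded test functions, worst-case error bounds of the form MSE ≤ 4ρ/N appear in the literature cited above (see, e.g., [1]), where ρ = 1 + χ²(target ‖ proposal) is the second moment of the normalized weight. The sketch below empirically compares the mean squared error of the self-normalized estimator against this bound; the scalar Gaussian model and quadrature grid are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scalar model: prior/proposal N(0, 1), likelihood N(y; u, sigma^2).
sigma, y = 0.5, 1.0

# rho = E_pi[w^2] / (E_pi[w])^2 with w the unnormalized likelihood weight,
# computed here by quadrature; the grid spacing and normalizers cancel.
x = np.linspace(-10.0, 10.0, 20001)
lik = np.exp(-0.5 * ((y - x) / sigma) ** 2)
pi_u = np.exp(-0.5 * x**2)                       # unnormalized N(0,1) density
rho = np.sum(lik**2 * pi_u) * np.sum(pi_u) / np.sum(lik * pi_u) ** 2

phi = np.tanh                                    # bounded test function, |phi| <= 1
exact = np.sum(phi(x) * lik * pi_u) / np.sum(lik * pi_u)

for N in [10, 100, 1000, 10000]:
    sq_errs = []
    for _ in range(500):
        u = rng.standard_normal(N)
        w = np.exp(-0.5 * ((y - u) / sigma) ** 2)
        w /= w.sum()
        sq_errs.append((np.sum(w * phi(u)) - exact) ** 2)
    print(f"N={N:6d}  MSE={np.mean(sq_errs):.2e}  bound 4*rho/N={4 * rho / N:.2e}")
```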
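On the high-dimensional limit: the difficulty can be seen directly in the collapse of the importance weights. In the minimal sketch below (a product Gaussian model, a hypothetical setup chosen for illustration), the effective sample size ESS = 1 / Σ w_i² degrades rapidly as the dimension d grows with the sample size N fixed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Weight collapse in high dimensions: proposal N(0, I_d), likelihood a product
# of d Gaussian factors (hypothetical setup). With N fixed, a single particle
# eventually carries almost all of the weight.
N, sigma = 10_000, 1.0
for d in [1, 5, 10, 20, 50]:
    u = rng.standard_normal((N, d))
    y = np.ones(d)                                       # hypothetical data
    log_w = -0.5 * np.sum(((y - u) / sigma) ** 2, axis=1)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    print(f"d={d:3d}  ESS={1.0 / np.sum(w**2):9.1f}  max weight={w.max():.3f}")
```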
Summary
Bayesian formulations have the potential to provide uncertainty quantification, for instance by computing several posterior quantiles. This motivates considering a worst-case error analysis [6] of importance sampling over large classes of test functions φ or, equivalently, bounding a certain distance between the random particle approximation measure μ^N and the target μ; see [1]. This analysis allows us to investigate the scaling of the χ²-divergence (and thereby the rate at which the sample size needs to grow) in several singular limit regimes, including small observation noise, large prior covariance and large dimension.
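As a concrete instance of these singular limits, the sketch below applies the closed form of the χ²-divergence between two Gaussians to a scalar conjugate model (prior N(0, τ²), observation y = u + N(0, σ²) noise); the parameter values are hypothetical. In this example ρ = 1 + χ²(posterior ‖ prior) grows like 1/σ as the noise vanishes, like the prior standard deviation τ as the prior spreads, and exponentially in the dimension for a product model, which is the sense in which the required sample size blows up.

```python
import numpy as np

def rho(tau2, sigma2, y):
    """rho = 1 + chi^2(posterior || prior) for the scalar conjugate model
    prior N(0, tau2), y = u + N(0, sigma2) noise, via the Gaussian closed form."""
    s2 = 1.0 / (1.0 / tau2 + 1.0 / sigma2)    # posterior variance
    m = s2 * y / sigma2                        # posterior mean
    return tau2 / np.sqrt(s2 * (2.0 * tau2 - s2)) * np.exp(m**2 / (2.0 * tau2 - s2))

# Small observation noise: rho grows like 1/sigma in this example.
for sigma in [1.0, 0.1, 0.01]:
    print(f"sigma={sigma:5.2f}  rho={rho(1.0, sigma**2, 1.0):.3e}")

# Large prior covariance: rho grows like the prior standard deviation tau.
for tau in [1.0, 10.0, 100.0]:
    print(f"tau={tau:6.1f}    rho={rho(tau**2, 1.0, 1.0):.3e}")

# Large dimension: for d independent copies of the model, rho_d = rho_1 ** d,
# so the required sample size grows exponentially with d.
for d in [1, 10, 50]:
    print(f"d={d:3d}       rho_d={rho(1.0, 1.0, 1.0) ** d:.3e}")
```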