Abstract

This paper concerns the approximation of probability measures on R^d with respect to the Kullback–Leibler divergence. Given an admissible target measure, we show the existence of the best approximation, with respect to this divergence, from certain sets of Gaussian measures and Gaussian mixtures. The asymptotic behavior of such best approximations is then studied in the small-parameter limit where the measure concentrates; this asymptotic behavior is characterized using Γ-convergence. The theory developed is then applied to understand the frequentist consistency of Bayesian inverse problems in finite dimensions. For a fixed realization of additive observational noise, we show the asymptotic normality of the posterior measure in the small noise limit. Taking into account the randomness of the noise, we prove a Bernstein–von Mises type result for the posterior measure.
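Schematically, and with notation introduced here purely for illustration (μ the admissible target measure, A the approximating class), the central variational problem can be written as

    ν* ∈ argmin_{ν ∈ A} D_KL(ν ‖ μ),    D_KL(ν ‖ μ) = ∫_{R^d} log(dν/dμ) dν,

where A is, for example, the set of Gaussian measures N(m, Σ) on R^d or a family of Gaussian mixtures; the direction in which the divergence is taken follows the usual variational-inference convention and should be read as part of this sketch rather than as the paper's precise formulation.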

Highlights

  • In this paper, we study the “best” approximation of a general finite-dimensional probability measure, which could be non-Gaussian, from a set of simple probability measures, such as a single Gaussian measure or a Gaussian mixture family.

  • For a fixed realization η = η† of the noise, by applying the theory developed in the previous section, we show a Bernstein–von Mises (BvM) type asymptotic normality theorem for μ^η_N in both limit processes, the small noise limit and the large data limit (see the schematic statement after this list).

  • We have studied a methodology widely used in applications, yet little analyzed, namely the approximation of a given target measure by a Gaussian, or by a Gaussian mixture.
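To fix ideas, a Bernstein–von Mises type statement of the kind referred to above can be written schematically as follows; the use of the total variation distance d_TV and the placeholders m_N and C_N are assumptions of this sketch rather than the paper's exact quantities:

    d_TV( μ^η_N , N(m_N, C_N) ) → 0   in probability with respect to the noise,

where, as the noise level tends to zero or the amount of data grows, the centering m_N concentrates near the truth x† and the covariance C_N shrinks accordingly.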

Summary

Introduction

We study the “best” approximation of a general finite-dimensional probability measure, which could be non-Gaussian, from a set of simple probability measures, such as a single Gaussian measure or a Gaussian mixture family. The existence of minimizers follows from the fact that the Kullback–Leibler divergence has compact sublevel sets, together with the closedness of A with respect to weak convergence of probability measures; see, e.g., [23, Corollary 2.2]. The bound |rε| ≤ Cε in Lemma 3.7 will be used to prove the rate of convergence for the posterior measures that arise from Bayesian inverse problems; see Theorem 5.4 and its proof. In connection with the discussion in Remark 3.9, the residual in (48) is here only shown to be of order o(1), whereas the quantitative bound |rε| ≤ Cε in (37) can be used to extract a rate of convergence. This can be used to study the limiting behavior of posterior measures arising from Bayesian inverse problems when multiple modes are present; see the corresponding section below. We assume that the data y is generated from the truth x† and a single realization of the Gaussian noise η†, i.e., y is obtained by adding the noise realization η† to the noise-free observation determined by x†.
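As a purely illustrative sketch of this methodology (not the paper's construction), the following Python snippet fits a single Gaussian to a bimodal target by minimizing a Monte Carlo estimate of the Kullback–Leibler divergence; the target density, the parameterization, and the choice of optimizer are all assumptions made for this example.

    import numpy as np
    from scipy.optimize import minimize

    # Illustrative sketch only: approximate a bimodal 1-d target by a single
    # Gaussian N(m, s^2) by minimizing a Monte Carlo estimate of D_KL(nu || mu).
    # The target below and the optimizer are assumptions, not the paper's setup.

    def log_target(x):
        # Unnormalized log-density of an equal-weight two-mode Gaussian mixture.
        return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

    def kl_estimate(params, n_samples=20000, seed=0):
        m, log_s = params
        s = np.exp(log_s)
        z = np.random.default_rng(seed).standard_normal(n_samples)
        x = m + s * z                     # samples from nu = N(m, s^2)
        log_nu = -0.5 * z**2 - np.log(s) - 0.5 * np.log(2.0 * np.pi)
        # E_nu[log nu - log mu], up to the unknown normalizing constant of mu,
        # which only shifts the objective and does not move the minimizer.
        return np.mean(log_nu - log_target(x))

    res = minimize(kl_estimate, x0=np.array([0.5, 0.0]), method="Nelder-Mead")
    print("best single-Gaussian fit: mean = %.3f, std = %.3f"
          % (res.x[0], np.exp(res.x[1])))

With the divergence taken in this direction, the fitted Gaussian typically locks onto one of the two modes; this mode-seeking behavior is exactly the situation in which approximation by a Gaussian mixture, as considered in the paper, becomes natural.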

