Abstract

Deep generative networks have achieved great success in high-dimensional density approximation, especially in applications to natural images and language. In this paper, we investigate their capability to capture the posterior distribution in Bayesian inverse problems by learning a transport map. Because only the unnormalized density of the posterior is available, training methods that learn from posterior samples, such as variational autoencoders and generative adversarial networks, are not applicable in our setting. We propose a class of network training methods that can be combined with sample-based Bayesian inference algorithms, such as various MCMC algorithms, the ensemble Kalman filter, and Stein variational gradient descent. Our experimental results show the pros and cons of deep generative networks in Bayesian inverse problems, and reveal the potential of the proposed methodology for capturing high-dimensional probability distributions.
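The core idea of fitting a transport map to an unnormalized posterior can be illustrated with a minimal sketch. The paper's actual networks and training algorithms are not specified in this abstract; the toy below (all names and choices are illustrative assumptions) fits a one-dimensional affine map T(z) = a·z + b, with z drawn from a standard normal reference, to an unnormalized Gaussian posterior by gradient descent on KL(q‖p), which requires only the unnormalized log-density, not posterior samples:

```python
import numpy as np

# Illustrative target: unnormalized Gaussian posterior N(mu, sigma^2).
mu, sigma = 2.0, 0.5

def neg_log_post(x):
    """Negative log posterior with the normalizing constant dropped."""
    return 0.5 * ((x - mu) / sigma) ** 2

rng = np.random.default_rng(0)
z = rng.standard_normal(10_000)   # reference samples, reused each step

# Affine transport map T(z) = a*z + b; minimize KL(q || p) up to constants:
#   E[neg_log_post(T(z))] - log(a)   (the -log(a) term is the entropy of q).
a, b, lr = 1.0, 0.0, 0.1
for _ in range(500):
    x = a * z + b
    grad_a = np.mean(z * (x - mu)) / sigma**2 - 1.0 / a
    grad_b = np.mean(x - mu) / sigma**2
    a -= lr * grad_a
    b -= lr * grad_b

# The map recovers the posterior scale and mean: a ~ sigma, b ~ mu.
print(a, b)
```

A deep generative network replaces the affine map with a neural transport, and the expectation is estimated by minibatches of reference samples; the sample-based algorithms mentioned above (MCMC, the ensemble Kalman filter, Stein variational gradient descent) can supply or refine the training signal in place of this plain gradient descent.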
