Abstract

Deep generative networks have achieved great success in high-dimensional density approximation, especially in applications to natural images and language. In this paper, we investigate their capability to capture the posterior distribution in Bayesian inverse problems by learning a transport map. Because only the unnormalized density of the posterior is available, training methods that learn from posterior samples, such as variational autoencoders and generative adversarial networks, are not applicable in our setting. We propose a class of network training methods that can be combined with sample-based Bayesian inference algorithms, such as various MCMC algorithms, the ensemble Kalman filter, and Stein variational gradient descent. Our experimental results show the pros and cons of deep generative networks in Bayesian inverse problems and reveal the potential of the proposed methodology for capturing high-dimensional probability distributions.
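As one concrete illustration of the sample-based inference algorithms named above, the sketch below runs Stein variational gradient descent (SVGD) on a toy one-dimensional Gaussian posterior, using only its unnormalized log-density through its score. This is a minimal illustrative sketch, not the paper's method: the target, particle count, step size, and bandwidth heuristic are all assumptions chosen for the example.

```python
import numpy as np

def svgd_step(particles, grad_log_p, step=0.1):
    """One SVGD update on 1-D particles: a kernel-weighted attraction toward
    high target density plus a repulsion term that keeps particles spread out.
    Uses an RBF kernel with the median heuristic for the bandwidth (an
    illustrative choice, not prescribed by the paper)."""
    n = particles.size
    diff = particles[:, None] - particles[None, :]   # diff[j, i] = x_j - x_i
    sq = diff ** 2
    h = np.median(sq) / np.log(n + 1) + 1e-8         # median-heuristic bandwidth
    K = np.exp(-sq / h)                              # RBF kernel matrix
    grads = grad_log_p(particles)                    # score of unnormalized target
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j) + d/dx_j k(x_j, x_i) ]
    phi = (K @ grads - (2.0 / h) * (diff * K).sum(axis=0)) / n
    return particles + step * phi

# Toy unnormalized posterior N(2, 1): only the score grad log p is required,
# never the normalizing constant.
grad_log_p = lambda x: -(x - 2.0)

rng = np.random.default_rng(0)
particles = rng.normal(0.0, 1.0, size=100)
for _ in range(500):
    particles = svgd_step(particles, grad_log_p)
```

After the loop the empirical mean and spread of the particles approximate the posterior mean and standard deviation; the same loop applies to any target for which the score of the unnormalized density can be evaluated.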
