Abstract

The Bayesian inference approach is widely used to solve inverse problems because it handles ill-posedness in a versatile and natural way. However, challenges often arise when the unknowns are continuous fields or parameters with high-resolution discrete representations, and the prior distribution of the unknown parameters is also frequently difficult to determine. To address these issues, this study proposes an operator learning-based generative adversarial network (OL-GAN) and integrates it into the Bayesian inference framework. In contrast to classical Bayesian approaches, the distinctive characteristic of the proposed method is that it learns the joint distribution of parameters and responses. By using the trained generative model to handle the prior in Bayes’ rule, the posteriors of the unknown parameters can, in principle, be approximated by any sampling algorithm (e.g., Markov chain Monte Carlo, MCMC) under the proposed framework. More importantly, sampling can be performed efficiently in a low-dimensional latent space shared by the components of the joint distribution. The latent space typically follows a simple, easy-to-sample distribution (e.g., Gaussian, uniform), which significantly reduces the computational cost of Bayesian inference while avoiding explicit prior selection. Furthermore, the generator is resolution-independent owing to the incorporation of operator learning: predictions can be obtained at arbitrary coordinates, and inversions can be performed even when the observation data are misaligned with the training data. Finally, the effectiveness of the proposed method is validated through several numerical experiments.
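
The latent-space sampling step described above can be illustrated with a minimal sketch. Everything below is an assumption made for illustration, not the paper's implementation: the generator signature generator(z, coords) -> (parameters, responses), the toy stand-in generator, and all numerical settings are hypothetical. The sketch runs random-walk Metropolis over the latent variable z, scoring each proposal with the standard-normal latent prior plus a Gaussian data misfit at the observation coordinates, then pushes the current latent state through the generator to collect posterior parameter samples.

    import numpy as np

    def log_post(z, generator, y_obs, coords, sigma):
        # Unnormalized log-posterior over the latent variable z:
        # standard-normal latent prior + Gaussian likelihood on the
        # generated responses at the observation coordinates.
        _, y = generator(z, coords)
        return -0.5 * np.sum(z ** 2) - 0.5 * np.sum((y - y_obs) ** 2) / sigma ** 2

    def latent_rwm(generator, y_obs, coords, latent_dim,
                   sigma=0.05, n_steps=5000, step=0.1, seed=0):
        # Random-walk Metropolis in the low-dimensional latent space.
        # Returns posterior draws of the physical parameters obtained by
        # pushing the latent chain through the generator.
        rng = np.random.default_rng(seed)
        z = rng.standard_normal(latent_dim)
        lp = log_post(z, generator, y_obs, coords, sigma)
        draws = []
        for _ in range(n_steps):
            z_prop = z + step * rng.standard_normal(latent_dim)
            lp_prop = log_post(z_prop, generator, y_obs, coords, sigma)
            if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
                z, lp = z_prop, lp_prop
            draws.append(generator(z, coords)[0])
        return np.asarray(draws)

    # Hypothetical stand-in for a trained resolution-independent generator:
    # the "parameter" is a scalar amplitude u read off the latent code, and
    # the response u*sin(pi*x) can be evaluated at arbitrary coordinates.
    def toy_generator(z, coords):
        u = z[0]
        return u, u * np.sin(np.pi * coords)

    coords = np.linspace(0.0, 1.0, 7)                 # observation locations
    rng = np.random.default_rng(1)
    y_obs = 1.5 * np.sin(np.pi * coords) + 0.05 * rng.standard_normal(coords.size)
    u_samples = latent_rwm(toy_generator, y_obs, coords, latent_dim=4)
    print("posterior mean/std of u:", u_samples[1000:].mean(), u_samples[1000:].std())

Because the proposal and acceptance operate only on the low-dimensional z, the chain is far cheaper to run than MCMC over a high-resolution field discretization, which is the efficiency argument the abstract makes; the same loop applies unchanged when the observation coordinates differ from the training grid, since the generator is queried at the coordinates of the data.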
