Abstract

We consider a distributed Bayesian parameter inference problem in which a networked set of agents collaboratively infers the posterior distribution of unknown parameters in a partial differential equation (PDE) from their noisy measurements of the PDE solution. Assuming the unknown parameters reside in a known compact set, we suppose a physics-informed neural network (PINN) has already been trained as the prior model and is valid for all parameter values within that set. PINNs incorporate the governing PDE as a training constraint, which improves generalization even with fewer training samples. We introduce a distributed Langevin Markov chain Monte Carlo algorithm that combines the trained PINN model with the agents' noisy measurements to approximate the posterior distribution of the unknown parameters. We establish convergence properties of the algorithm and demonstrate the effectiveness of the proposed approach through numerical simulations.
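
To illustrate the kind of update the abstract describes, the following is a minimal sketch (not the paper's exact algorithm) of a distributed Langevin MCMC iteration in which each agent evaluates its local Gaussian log-likelihood through a PINN surrogate of the PDE solution and mixes its sample with its neighbors'. The surrogate `pinn`, the measurement model, the ring communication graph, and the constants `eta`, `beta`, and `sigma` are illustrative assumptions rather than quantities taken from the paper.

```python
# Sketch of a distributed Langevin step with a PINN surrogate likelihood.
# All model and data details below are placeholders, not the paper's setup.
import torch

torch.manual_seed(0)

n_agents, dim_theta = 4, 2          # number of agents, dimension of unknown PDE parameters
eta, sigma = 1e-3, 0.1              # Langevin step size, measurement noise std (assumed)

# Hypothetical trained PINN surrogate: maps (theta, x) -> predicted PDE solution u(x; theta).
pinn = torch.nn.Sequential(torch.nn.Linear(dim_theta + 1, 32), torch.nn.Tanh(),
                           torch.nn.Linear(32, 1))

# Each agent holds noisy measurements of the PDE solution at its own sensor locations (placeholder data).
x_obs = [torch.rand(5, 1) for _ in range(n_agents)]
y_obs = [torch.randn(5, 1) * sigma for _ in range(n_agents)]

# Ring communication graph with a doubly stochastic mixing matrix W.
W = torch.zeros(n_agents, n_agents)
for i in range(n_agents):
    W[i, i], W[i, (i - 1) % n_agents], W[i, (i + 1) % n_agents] = 0.5, 0.25, 0.25

theta = torch.zeros(n_agents, dim_theta)  # each row is one agent's current parameter sample


def local_log_lik(th, xi, yi):
    """Gaussian log-likelihood of agent i's measurements under the PINN surrogate."""
    inp = torch.cat([th.expand(xi.shape[0], -1), xi], dim=1)
    resid = yi - pinn(inp)
    return -0.5 * (resid ** 2).sum() / sigma ** 2


for _ in range(100):                      # Langevin iterations
    grads = torch.zeros_like(theta)
    for i in range(n_agents):
        th = theta[i].clone().requires_grad_(True)
        local_log_lik(th, x_obs[i], y_obs[i]).backward()
        grads[i] = th.grad
    # Consensus averaging with neighbors, local gradient ascent step, injected Gaussian noise.
    theta = W @ theta + eta * grads + torch.sqrt(torch.tensor(2 * eta)) * torch.randn_like(theta)
```

In this sketch the mixing step `W @ theta` plays the role of the network communication, while the gradient and noise terms form a standard unadjusted Langevin update on each agent's local log-likelihood; the paper's actual algorithm and its convergence analysis are given in the full text.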
