Abstract

We prove rigorous bounds on the errors resulting from the approximation of the incompressible Navier–Stokes equations with (extended) physics-informed neural networks. We show that the underlying partial differential equation residual can be made arbitrarily small for tanh neural networks with two hidden layers. Moreover, the total error can be estimated in terms of the training error, network size and number of quadrature points. The theory is illustrated with numerical experiments.
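As a minimal illustration of the setup the abstract describes — a tanh neural network with two hidden layers whose PDE residual is evaluated at collocation points — the following sketch forms the residual of the 2D incompressible Navier–Stokes equations. All widths, weights, and the use of finite differences for the derivatives are illustrative assumptions, not the paper's construction or training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-hidden-layer tanh network (t, x, y) -> (u, v, p);
# the width 16 and random weights are illustrative only.
W1, b1 = rng.normal(size=(3, 16)) * 0.5, np.zeros(16)
W2, b2 = rng.normal(size=(16, 16)) * 0.5, np.zeros(16)
W3, b3 = rng.normal(size=(16, 3)) * 0.5, np.zeros(3)

def net(z):
    """Two-hidden-layer tanh network; z = (t, x, y)."""
    h = np.tanh(z @ W1 + b1)
    h = np.tanh(h @ W2 + b2)
    return h @ W3 + b3  # velocity components u, v and pressure p

def ns_residual(z, nu=0.01, eps=1e-3):
    """PDE residual of the 2D incompressible Navier-Stokes equations
    at a collocation point z, using central finite differences
    (a PINN would typically use automatic differentiation instead)."""
    def d1(i):  # first derivative of (u, v, p) w.r.t. coordinate i
        e = np.zeros(3); e[i] = eps
        return (net(z + e) - net(z - e)) / (2 * eps)
    def d2(i):  # second derivative of (u, v, p) w.r.t. coordinate i
        e = np.zeros(3); e[i] = eps
        return (net(z + e) - 2 * net(z) + net(z - e)) / eps**2
    u, v, _ = net(z)
    ut, ux, uy = d1(0)[0], d1(1)[0], d1(2)[0]
    vt, vx, vy = d1(0)[1], d1(1)[1], d1(2)[1]
    px, py = d1(1)[2], d1(2)[2]
    lap_u = d2(1)[0] + d2(2)[0]
    lap_v = d2(1)[1] + d2(2)[1]
    r_u = ut + u * ux + v * uy + px - nu * lap_u  # x-momentum
    r_v = vt + u * vx + v * vy + py - nu * lap_v  # y-momentum
    r_div = ux + vy                               # incompressibility
    return np.array([r_u, r_v, r_div])
```

Training a PINN would minimize a quadrature approximation of the squared residual over many such points, together with boundary and initial condition terms; the paper's bounds relate that training error and the number of quadrature points to the total error.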
