Abstract

In this study, novel physics-informed neural network (PINN) methods are proposed to enable efficient training with improved accuracy. The differential operators required for loss evaluation at collocation points are conventionally computed via automatic differentiation (AD). Such PINNs require many optimization iterations and are very sample intensive, because without sufficient collocation points they are prone to converging to unphysical solutions. To make PINN training sample efficient, numerical differentiation, coupled with automatic differentiation, is used to define the loss function. The proposed coupled-automatic-numerical differentiation scheme, labeled can-PINN, strongly links neighboring collocation points, enabling efficient training in sparse-sample regimes. The superior performance of can-PINNs is demonstrated on several challenging PINN problems, including the rotational flow problem and the channel flow over a backward-facing step problem. The results show that can-PINNs consistently achieve very good accuracy on these challenging problems, whereas conventional AD-based PINNs fail.
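To make the contrast concrete, the following is a minimal sketch of the two kinds of PDE residual the abstract describes, for the toy equation u'(x) = f(x). It is an illustration only, not the paper's exact can-PINN formulation: the "network" `u` is a closed-form function, its "AD" derivative `du_ad` is written analytically (standing in for automatic differentiation), and the coupled residual simply averages the AD derivative with a central finite difference over neighboring collocation points. All names (`u`, `du_ad`, `residual_ad`, `residual_can`) are hypothetical.

```python
import math

def u(x):
    # Stand-in for a trained PINN's output; here an exact solution u(x) = sin(x).
    return math.sin(x)

def du_ad(x):
    # Derivative as automatic differentiation would return it
    # (written analytically for this sketch).
    return math.cos(x)

def residual_ad(x, f):
    # Conventional AD-based PINN residual: purely pointwise, so each
    # collocation point is constrained independently of its neighbors.
    return du_ad(x) - f(x)

def residual_can(x, f, h=0.05):
    # Illustrative coupled AD-ND residual: a central difference over
    # neighboring collocation points is blended with the AD derivative,
    # so nearby samples constrain each other. (A simple averaging is used
    # here; the paper's actual coupling scheme differs.)
    du_nd = (u(x + h) - u(x - h)) / (2.0 * h)
    return 0.5 * (du_ad(x) + du_nd) - f(x)

# Target PDE: u'(x) = cos(x), satisfied exactly by u(x) = sin(x).
f = math.cos
xs = [0.1 * i for i in range(10)]  # collocation points
loss_ad = sum(residual_ad(x, f) ** 2 for x in xs) / len(xs)
loss_can = sum(residual_can(x, f) ** 2 for x in xs) / len(xs)
```

Because `u` solves the PDE exactly, the AD residual vanishes and the coupled residual is small but nonzero (its finite-difference part carries an O(h^2) truncation error); during actual training it is this stencil-based term that ties neighboring collocation points together.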
