Abstract

Physics-informed neural networks (PINNs) are an emerging technique for solving partial differential equations (PDEs) arising in flow problems. Owing to their low computational cost, gradient descent algorithms combined with the weighted-objectives method are commonly used to optimize the loss functions during PINN training. However, the interaction mechanisms between the gradients of the loss functions are not fully understood, which leads to poor optimization performance. To address this, an adaptive gradient descent algorithm (AGDA) is proposed based on an analysis of these interaction mechanisms and is then validated on analytical PDEs and flow problems. First, the interactions between loss-function gradients during PINN training with the traditional Adam optimizer are analyzed, and the main factors responsible for the Adam optimizer's poor performance are identified. Then, a new AGDA optimizer is developed for PINN training through two modifications: (1) balancing the magnitude differences between loss-function gradients and (2) eliminating conflicts between gradient directions. Finally, three types of PDEs (elliptic, hyperbolic, and parabolic) and four viscous incompressible flow problems are selected to validate the proposed algorithm. It is found that, to reach a specified accuracy, the AGDA optimizer requires about 16%–90% of the training time of the Adam optimizer and 41%–64% of that of the PCGrad optimizer, and it needs about 10%–68% of the iterations of the Adam optimizer and 38%–77% of those of the PCGrad optimizer. Therefore, the PINN method coupled with the AGDA optimizer is a more efficient and robust technique for solving partial differential equations of flow problems.
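The two modifications described above can be illustrated with a minimal numerical sketch. The function below is a hypothetical illustration, not the paper's exact AGDA update rule: it balances the per-loss gradient magnitudes by rescaling each gradient to the mean norm, and then removes conflicting components by projecting a gradient off any other gradient with which it has a negative inner product (a PCGrad-style projection), before summing.

```python
import numpy as np

def combine_gradients(grads, eps=1e-12):
    """Hypothetical sketch of the two AGDA-style modifications:
    (1) balance the magnitudes of the per-loss gradients,
    (2) eliminate direction conflicts before summing.
    The paper's actual AGDA update rule may differ in detail."""
    # (1) Rescale every gradient to the mean norm so that no single
    #     loss term dominates the combined update.
    norms = [np.linalg.norm(g) for g in grads]
    mean_norm = float(np.mean(norms))
    balanced = [g * (mean_norm / (n + eps)) for g, n in zip(grads, norms)]

    # (2) For each gradient, project out the component that conflicts
    #     (negative inner product) with any other gradient.
    adjusted = []
    for i, gi in enumerate(balanced):
        g = gi.copy()
        for j, gj in enumerate(balanced):
            if i == j:
                continue
            dot = float(np.dot(g, gj))
            if dot < 0.0:  # directions conflict
                g = g - dot / (float(np.dot(gj, gj)) + eps) * gj
        adjusted.append(g)

    # The combined update no longer points against any individual loss.
    return np.sum(adjusted, axis=0)
```

For two conflicting gradients such as `[1, 0]` and `[-0.5, 1]`, the combined vector returned by this sketch has a non-negative inner product with both originals, i.e., the update does not increase either loss to first order.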
