In recent years, physics-informed neural networks (PINNs) have attracted increasing attention for their ability to obtain high-precision, data-driven solutions quickly from only a small amount of data. Although the model performs well on some nonlinear problems, it still has shortcomings. For example, unbalanced back-propagation gradients cause the gradient values to oscillate strongly during training, which easily destabilizes the prediction accuracy. Motivated by this, we propose a gradient-optimized physics-informed neural networks (GOPINNs) model in this paper, which introduces a new network architecture and uses gradient statistics to balance the interaction between the different terms of the loss function during training, making the network more robust to gradient fluctuations. Taking the Camassa-Holm (CH) equation and the derivative nonlinear Schrödinger (DNLS) equation as examples, GOPINNs is used to simulate the peakon solution of the CH equation and the rational wave and rogue wave solutions of the DNLS equation. The numerical results show that GOPINNs effectively smooths the gradients of the loss function during training and obtains solutions of higher precision than the original PINNs. In conclusion, our work provides new insights into optimizing the learning performance of neural networks: it saves more than one third of the computation time when simulating the complex CH and DNLS equations and improves the prediction accuracy by nearly a factor of ten.
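To illustrate the kind of gradient-statistics balancing the abstract describes, the sketch below shows a minimal PINN training step in which the weight on the data-fitting term is updated from the ratio of gradient magnitudes of the two loss terms. This is not the authors' GOPINNs implementation: the toy residual (a simple advection operator standing in for the CH/DNLS residuals), the network sizes, the specific weight-update rule, and all hyperparameters are illustrative assumptions.

```python
# Hedged sketch of gradient-statistics loss balancing for a PINN.
# The residual, network, and update rule are illustrative assumptions,
# not the GOPINNs code from the paper.
import jax
import jax.numpy as jnp


def init_mlp(key, sizes):
    """Initialize a small fully connected network (illustrative sizes)."""
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (m, n)) * jnp.sqrt(2.0 / m)
        params.append((w, jnp.zeros(n)))
    return params


def mlp(params, x):
    """Forward pass on a single (t, x) point; returns a scalar u(t, x)."""
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return (x @ w + b).squeeze()


def data_loss(params, x_d, u_d):
    """Mean-squared misfit on measured/initial-boundary data."""
    return jnp.mean((jax.vmap(lambda x: mlp(params, x))(x_d) - u_d) ** 2)


def residual_loss(params, x_r):
    """Toy PDE residual u_t + u_x = 0 as a stand-in for the CH/DNLS residuals."""
    def residual(x):
        grad_u = jax.grad(lambda z: mlp(params, z))(x)
        return grad_u[0] + grad_u[1]
    return jnp.mean(jax.vmap(residual)(x_r) ** 2)


def gradient_statistics_weight(params, x_d, u_d, x_r):
    """Weight = max |grad of residual term| / mean |grad of data term|."""
    g_r = jax.grad(residual_loss)(params, x_r)
    g_d = jax.grad(data_loss)(params, x_d, u_d)
    flat = lambda g: jnp.concatenate([jnp.abs(a).ravel() for wb in g for a in wb])
    return jnp.max(flat(g_r)) / (jnp.mean(flat(g_d)) + 1e-8)


@jax.jit
def train_step(params, lam, x_d, u_d, x_r, lr=1e-3, alpha=0.9):
    """One gradient-descent step with a moving-average balancing weight."""
    lam_new = alpha * lam + (1 - alpha) * gradient_statistics_weight(params, x_d, u_d, x_r)
    loss_fn = lambda p: residual_loss(p, x_r) + lam_new * data_loss(p, x_d, u_d)
    grads = jax.grad(loss_fn)(params)
    params = [(w - lr * gw, b - lr * gb) for (w, b), (gw, gb) in zip(params, grads)]
    return params, lam_new
```

In this sketch, the moving average (alpha = 0.9, an assumed value) smooths the weight updates so that a single-batch spike in either loss term's gradients does not destabilize training, which is the spirit of the smoothing effect reported in the abstract; the actual GOPINNs architecture and update rule may differ.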