Abstract

Our recent study found that physics-informed neural networks (PINNs) tend to become local approximators after training. This observation motivated the development of a novel physics-informed radial basis network (PIRBN), which maintains the local approximating property throughout the entire training process. Unlike deep neural networks, a PIRBN comprises only one hidden layer and a radial basis "activation" function. Under appropriate conditions, we demonstrated that training a PIRBN with gradient descent methods converges to a Gaussian process. We also studied the training dynamics of PIRBNs via neural tangent kernel (NTK) theory and conducted comprehensive investigations into initialisation strategies for PIRBNs. Numerical examples demonstrate that PIRBN is more effective than PINN in solving nonlinear partial differential equations with high-frequency features and ill-posed computational domains. Moreover, existing PINN numerical techniques, such as adaptive learning, decomposition and different types of loss functions, are applicable to PIRBN. The programs that can reproduce all numerical results are available at https://github.com/JinshuaiBai/PIRBN.
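To make the architectural claim concrete, the sketch below shows a forward pass of a single-hidden-layer radial basis network of the kind the abstract describes, assuming a Gaussian basis of the form exp(-b_j^2 ||x - c_j||^2). The function name, array shapes and initialisation values here are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
import numpy as np

def pirbn_forward(x, centers, widths, weights):
    """Illustrative forward pass of a single-hidden-layer RBF network.

    x       : (n_points, n_dims) input coordinates
    centers : (n_neurons, n_dims) basis-function centres (hypothetical)
    widths  : (n_neurons,) shape parameters b_j of the Gaussian basis
    weights : (n_neurons,) linear output-layer weights
    """
    # Gaussian radial basis: phi_j(x) = exp(-b_j^2 * ||x - c_j||^2)
    sq_dist = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    phi = np.exp(-(widths ** 2) * sq_dist)
    # Linear output layer: u(x) = sum_j w_j * phi_j(x)
    return phi @ weights

# Example usage (assumed setup): 10 basis functions on the unit interval
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
centers = np.linspace(0.0, 1.0, 10).reshape(-1, 1)
widths = np.full(10, 5.0)            # larger b_j -> more localised basis
weights = rng.standard_normal(10)
u = pirbn_forward(x, centers, widths, weights)   # shape (5,)
```

In a physics-informed setting, the centres, widths and weights would be trained by minimising a residual-based loss of the governing equation, which is outside the scope of this sketch.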
