Abstract

Physics-informed neural networks (PINNs) have been widely applied across fields because of their effectiveness in solving partial differential equations (PDEs). However, the accuracy and efficiency of PINNs must be improved considerably for scientific and commercial use. To address this issue, we propose a novel dimension-augmented physics-informed neural network (DaPINN), which simultaneously and significantly improves the accuracy and efficiency of the base PINN. In the DaPINN model, we expand the dimensionality of the network input by inserting additional sample features and then incorporate the expanded dimensionality into the loss function. Moreover, we verify the effectiveness of power series augmentation, Fourier series augmentation and replica augmentation in both forward and inverse problems. In most experiments, the error of DaPINN is 1 to 2 orders of magnitude lower than that of the base PINN. The results show that DaPINN outperforms the original PINN in both accuracy and efficiency, with a reduced dependence on the number of sample points. We also discuss the computational complexity of DaPINN, its implications for network size, other implementations of DaPINN, and the compatibility of DaPINN's methods with residual-based adaptive refinement (RAR), self-adaptive physics-informed neural networks (SA-PINNs) and gradient-enhanced physics-informed neural networks (gPINNs).
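The input-augmentation idea described above can be illustrated with a minimal sketch. The helper below is hypothetical (not taken from the paper's code); it assumes the power-series variant appends monomials of the original coordinate and the Fourier variant appends sine/cosine terms, so a 1-D sample becomes a higher-dimensional network input.

```python
import numpy as np

def augment_inputs(x, method="power", order=3):
    """Hypothetical DaPINN-style input augmentation for 1-D coordinates.

    x: array of shape (n_points, 1), original sample coordinates.
    Returns an array with extra feature columns appended, raising the
    input dimensionality seen by the downstream network.
    """
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    if method == "power":
        # Power-series augmentation: [x, x^2, ..., x^order].
        feats = [x ** k for k in range(1, order + 1)]
    elif method == "fourier":
        # Fourier-series augmentation: [x, sin(k*pi*x), cos(k*pi*x), ...].
        feats = [x]
        for k in range(1, order + 1):
            feats.append(np.sin(k * np.pi * x))
            feats.append(np.cos(k * np.pi * x))
    else:
        raise ValueError(f"unknown augmentation method: {method}")
    return np.concatenate(feats, axis=1)

# Example: five 1-D samples become 3-D inputs under power augmentation
# and 5-D inputs under a second-order Fourier augmentation.
x = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
print(augment_inputs(x, "power", order=3).shape)    # (5, 3)
print(augment_inputs(x, "fourier", order=2).shape)  # (5, 5)
```

The augmented columns would then be fed to the PINN in place of the raw coordinate, with the PDE residual in the loss rewritten in terms of the expanded input, as the abstract describes.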
