Abstract
This research introduces an accelerated training approach for Vanilla Physics-Informed Neural Networks (PINNs) that addresses three factors affecting the loss function: the initial weight state of the neural network, the ratio of domain to boundary points, and the loss weighting factor. The proposed method involves two phases. In the first phase, a dedicated loss function is constructed from a subset of the boundary conditions and partial differential equation terms. We also introduce preprocessing procedures that reduce the variance at initialization and select domain points according to the initial weight states of various neural networks. The second phase resembles Vanilla-PINN training, but a portion of the random weights is replaced with weights from the first phase. As a result, the network's structure is biased toward the boundary conditions, which in turn affects overall convergence. The method is evaluated on three benchmarks: two-dimensional flow over a cylinder, an inverse problem of inlet velocity determination, and the Burgers equation. Incorporating the weights generated in the first training phase neutralizes imbalance effects. Notably, the proposed approach outperforms Vanilla-PINN in speed and likelihood of convergence, and it eliminates the need for hyperparameter tuning to balance the loss function.
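The two-phase idea described above can be illustrated with a minimal sketch. This is an assumed implementation, not the authors' code: phase 1 trains a small network on the boundary condition alone, and phase 2 warm-starts full physics-informed training from those weights. The toy problem (the ODE u'(x) = u(x) on [0, 1] with u(0) = 1), the network size, and the finite-difference gradient descent are all illustrative choices, not from the paper.

```python
import numpy as np

# Toy problem: u'(x) = u(x) on [0, 1], u(0) = 1 (exact solution: exp(x)).
rng = np.random.default_rng(0)

def init_params():
    # one hidden layer with 8 tanh units (illustrative size)
    return {"W1": rng.normal(0, 1, (1, 8)), "b1": np.zeros(8),
            "W2": rng.normal(0, 1, (8, 1)), "b2": np.zeros(1)}

def net(p, x):
    h = np.tanh(x[:, None] @ p["W1"] + p["b1"])
    return (h @ p["W2"] + p["b2"]).ravel()

def boundary_loss(p):
    # enforce the boundary condition u(0) = 1
    return (net(p, np.array([0.0]))[0] - 1.0) ** 2

def physics_loss(p, x, eps=1e-4):
    # PDE residual of u' - u = 0; derivative by central differences
    du = (net(p, x + eps) - net(p, x - eps)) / (2 * eps)
    return np.mean((du - net(p, x)) ** 2)

def train(p, loss_fn, steps, lr=0.02, fd=1e-6):
    # plain gradient descent with finite-difference parameter gradients,
    # to keep the sketch dependency-free (no autodiff framework)
    for _ in range(steps):
        grads = {}
        for k, v in p.items():
            g = np.zeros_like(v)
            for i in np.ndindex(v.shape):
                old = v[i]
                v[i] = old + fd; hi = loss_fn(p)
                v[i] = old - fd; lo = loss_fn(p)
                v[i] = old
                g[i] = (hi - lo) / (2 * fd)
            grads[k] = g
        for k in p:
            p[k] -= lr * grads[k]
    return p

x_dom = np.linspace(0.0, 1.0, 16)
full_loss = lambda p: physics_loss(p, x_dom) + boundary_loss(p)

# Phase 1: boundary-only pre-training
p = train(init_params(), boundary_loss, steps=150)
bl1 = boundary_loss(p)  # boundary fit after phase 1

# Phase 2: Vanilla-PINN-style training, warm-started from the phase-1 weights
start = full_loss(p)
p = train(p, full_loss, steps=300)
print(bl1, start, full_loss(p))
```

Because the phase-2 network already satisfies the boundary condition at initialization, the combined loss starts from a configuration biased toward the boundary data, which is the mechanism the abstract attributes to the warm start.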