Abstract

Physics-Informed Neural Networks (PINNs) are an approach in which neural networks (NNs) embed model equations, such as Partial Differential Equations (PDEs), directly in their loss or architecture. This framework has proven effective for solving diverse problem classes, including PDEs, fractional equations, integro-differential equations, and stochastic PDEs. It is a versatile multi-task learning framework that trains NNs to fit observed data while simultaneously minimizing PDE residuals. This paper surveys the PINN landscape, delineating the approach's inherent strengths and weaknesses. Beyond the fundamental characteristics of these networks, the review covers a broader spectrum of collocation-based physics-informed neural networks beyond the core PINN model, including variants such as physics-constrained neural networks (PCNNs), hp-variational PINNs (hp-VPINNs), and conservative PINNs (CPINNs). The study highlights a predominant research focus on tailoring PINNs through several strategies: adapting activation functions, refining gradient optimization techniques, innovating neural network architectures, and enhancing loss function design. Although PINNs have demonstrated broad applicability and can surpass classical numerical methods such as the Finite Element Method (FEM) in certain contexts, the review identifies ongoing opportunities for advancement, notably persisting theoretical challenges whose resolution is needed for the continued refinement of this approach.
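The multi-task loss described above can be sketched in a few lines. The following is a minimal illustrative example, not taken from the paper: a tiny randomly initialized MLP for the 1D Poisson problem u''(x) = -π² sin(πx) on [0, 1], with a PINN-style loss combining a data term (the boundary conditions) and a PDE residual term evaluated at collocation points. For simplicity the second derivative is approximated by central finite differences; practical PINNs use automatic differentiation instead.

```python
import numpy as np

# Tiny one-hidden-layer MLP with random weights (untrained; a real PINN
# would minimize the loss below with a gradient-based optimizer).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(1, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def u(x):
    """Network surrogate u_theta(x) for the PDE solution."""
    h = np.tanh(x[:, None] @ W1 + b1)
    return (h @ W2 + b2).ravel()

def pinn_loss(x_colloc, eps=1e-3):
    # PDE residual term: || u''(x) - f(x) ||^2 at collocation points,
    # with u'' approximated by central finite differences.
    u_xx = (u(x_colloc + eps) - 2.0 * u(x_colloc) + u(x_colloc - eps)) / eps**2
    f = -np.pi**2 * np.sin(np.pi * x_colloc)
    residual_term = np.mean((u_xx - f) ** 2)
    # Data term: boundary conditions u(0) = u(1) = 0.
    data_term = np.mean(u(np.array([0.0, 1.0])) ** 2)
    return data_term + residual_term  # multi-task objective

x_colloc = np.linspace(0.01, 0.99, 50)  # interior collocation points
loss = pinn_loss(x_colloc)
```

Training would then proceed by minimizing `pinn_loss` over the network parameters, which is exactly the multi-task trade-off the abstract describes: fitting data and enforcing the physics at once.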
