Abstract

In our recently proposed Integrated Finite Element Neural Network (I-FENN) framework (Pantidis and Mobasher, 2023), we showcased how PINNs can be deployed on a finite element-level basis to swiftly approximate a state variable of interest, and we applied the framework in the context of non-local gradient-enhanced damage mechanics. In this paper, we enhance the rigor and performance of I-FENN by focusing on two crucial aspects of its PINN component: (a) the error convergence analysis and (b) the hyperparameter-performance relationship. Guided by the available theoretical formulations in the field, we introduce a systematic numerical approach based on a novel set of holistic performance metrics to address both objectives. For the first objective, we explore in detail the convergence of the PINN training error and the global error against the network size and the training sample size. We demonstrate a consistent converging behavior of the two error types for every investigated combination of network complexity, dataset size, and choice of hyperparameters, which empirically verifies the conformance of the PINN setup and implementation to the available convergence theories. For the second objective, we establish a priori knowledge of the hyperparameters that favor higher predictive accuracy, lower computational effort, and the lowest likelihood of arriving at trivial solutions. The analysis leads to several outcomes that contribute to the better performance of I-FENN, and it fills a long-standing gap in the PINN literature with regard to the numerical convergence of the network errors while accounting for commonly used optimizers (Adam and L-BFGS). The proposed analysis method can be directly extended to other ML applications in science and engineering. The code and data utilized in the analysis are publicly available to aid the reproduction and extension of this research.
