Abstract

As a new approach for solving partial differential equations (PDEs), physics-informed neural networks (PINNs) have received extensive attention in recent years and have achieved breakthroughs in various fields. Serving as neural network-based universal solvers, they offer advantages such as high continuity and mesh-free operation. However, a growing number of studies have pointed out that PINNs may fail to converge to accurate solutions on complex tasks. In this paper, we highlight the differences between training PINNs and conventional neural network tasks, summarizing three key characteristics of PINN training. Based on these findings, we propose a strategy of task decomposition and progressive learning, which explains the effectiveness of existing methods for time-dependent problems and reveals their limitations. For both time-dependent and time-independent problems, we partition the task into physically complete and progressively more challenging sub-tasks: the division is based on the coverage of the time domain for time-dependent problems and on the resolution requirements for time-independent problems. Through task parameters and the decomposition of the loss terms, we adaptively control the increase in task complexity during training. The proposed method overcomes the limitations of existing approaches, incurs no additional computational cost, and applies to both time-dependent and time-independent problems, demonstrating greater generality. Furthermore, building on the concepts of task decomposition and progressive learning, we introduce, for the first time, a stable and efficient minibatch training method. In a series of numerical experiments on benchmark PDE problems, our approaches outperform other state-of-the-art algorithms.
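The progressive-learning idea for time-dependent problems can be illustrated with a minimal sketch: train on a small initial time window and expand the window only once the current sub-task is learned. The snippet below is purely illustrative and assumes a toy surrogate `residual_loss` in place of a real PINN's PDE residual (a scalar `theta` stands in for the network weights); it is not the paper's actual algorithm.

```python
import numpy as np

def residual_loss(t_pts, theta):
    # Toy stand-in for a PDE residual loss: it vanishes as the
    # "model parameter" theta approaches the true value 1.0 and
    # grows with later times (mimicking harder late-time dynamics).
    return np.mean((theta - 1.0) ** 2 * (1.0 + t_pts))

def train_progressive(T=1.0, n_stages=5, tol=1e-3, steps_per_stage=200, lr=0.1):
    theta = 0.0                      # stand-in for network weights
    t_max = T / n_stages             # start with a small time window
    history = []
    while t_max <= T + 1e-12:
        # Sample collocation points only from the current sub-task [0, t_max].
        t_pts = np.random.uniform(0.0, t_max, size=128)
        for _ in range(steps_per_stage):
            grad = np.mean(2.0 * (theta - 1.0) * (1.0 + t_pts))  # d(loss)/d(theta)
            theta -= lr * grad       # plain gradient descent
        loss = residual_loss(t_pts, theta)
        history.append((t_max, loss))
        if loss < tol:               # expand only once the sub-task is learned
            t_max += T / n_stages
    return theta, history
```

A real implementation would replace `theta` with network parameters, `residual_loss` with the automatic-differentiation PDE residual, and could grow `t_max` continuously through a task parameter rather than in discrete stages.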
