Physics-Informed Neural Networks (PINNs) are artificial neural networks that encode Partial Differential Equations (PDEs) as an integral component of the ML model. PINNs are now successfully used to solve PDEs, fractional equations, and integro-differential equations, including both direct and inverse problems. As with other kinds of artificial neural networks, the architecture, including the number and sizes of layers, the activation functions, and other hyperparameters, can significantly influence network performance. Despite considerable work in this field, there is still no clear guidance on how to choose an optimal network architecture in a consistent manner; in practice, expertise is required, along with a significant number of manual trial-and-error cycles. In this paper, we propose PINN/GA (PINN/Genetic Algorithm), a fully automatic design of a PINN by an evolutionary strategy with specially tailored selection, crossover, and mutation operators adapted for deep neural network architecture and hyperparameter search. The PINN/GA strategy starts from a population of simple PINNs and adds new layers only if doing so brings clear accuracy benefits, keeping the PINNs in the population as simple as possible. Since evaluating dozens of neural networks through the evolutionary process incurs enormous computational costs, the method employs a scalable computational design based on containers and Kubernetes batch orchestration. To demonstrate the potential of the proposed approach, we chose two non-trivial direct problems: the first is a 1D transient Stefan model with time-dependent Dirichlet boundary conditions, describing a melting process, and the second is the Helmholtz wave equation over a 2D square domain.
We found that PINN accuracy gradually improves throughout the evolutionary process, exhibiting better performance and stability than parallel random search and Hyperopt's Tree of Parzen Estimators, while keeping the network design reasonably simple.
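The evolutionary loop described above can be illustrated with a minimal, self-contained sketch: a genetic algorithm over lists of hidden-layer widths, with truncation selection, one-point crossover, and mutation operators that widen, narrow, or deepen a network. A cheap surrogate fitness with a complexity penalty stands in for actual PINN training; the operator names, fitness form, and all parameters here are illustrative assumptions, not the authors' implementation.

```python
import random

random.seed(0)

def fitness(arch):
    # Surrogate "loss" standing in for a trained PINN's error (assumed form):
    # error shrinks as capacity grows, while a small penalty on depth and
    # width keeps architectures as simple as possible, mirroring the
    # simplicity bias described in the abstract. Lower is better.
    capacity = sum(arch)
    error = 1.0 / (1.0 + capacity)
    complexity_penalty = 1e-3 * (len(arch) + capacity / 10)
    return error + complexity_penalty

def mutate(arch):
    # Widen or narrow a random layer, or append a new small layer —
    # depth grows only gradually, so deeper nets must earn their keep.
    arch = list(arch)
    op = random.choice(["widen", "narrow", "deepen"])
    if op == "deepen":
        arch.append(8)
    else:
        i = random.randrange(len(arch))
        arch[i] = arch[i] + 8 if op == "widen" else max(8, arch[i] - 8)
    return arch

def crossover(a, b):
    # One-point crossover on the layer-width lists.
    cut = min(len(a), len(b)) // 2
    return a[:cut] + b[cut:]

def evolve(pop_size=8, generations=20):
    # Start from a population of deliberately simple one-layer networks.
    pop = [[random.choice([8, 16])] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]  # truncation selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            children.append(mutate(crossover(p1, p2)))
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
```

In a real PINN/GA run, `fitness` would train a candidate PINN on the PDE residual and boundary conditions (the step that motivates the Kubernetes-based batch evaluation), but the selection/crossover/mutation structure of the loop is the same.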