Abstract

The physics-informed neural network (PINN) formulation allows a neural network to be trained using both the training data and prior domain knowledge about the physical system that generates the data. In particular, it combines a loss term for the data with a loss term for the physics, where the latter measures the deviation from a partial differential equation describing the system. Conventionally, the two loss terms are combined in a weighted sum whose weights are usually chosen manually. It is known that balancing the different loss terms can make the training process more efficient. In addition, it is necessary to find the optimal architecture of the neural network in order to obtain a hypothesis set in which the PINN is easier to train. In this work, we propose a multi-objective optimization approach to find the optimal values for the loss-function weights, as well as the optimal activation function, number of layers, and number of neurons per layer. We validate our results on the Poisson, Burgers, and advection-diffusion equations and show that we are able to find accurate approximations of the solutions using optimal hyperparameters.
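For readers unfamiliar with this composite loss, the following is a minimal sketch (not the authors' code) of a PINN loss for a one-dimensional Poisson problem u''(x) = f(x), written with PyTorch. The weight `w_pde` and the architecture parameters (`n_layers`, `n_neurons`, `activation`) stand in for the kind of hyperparameters the proposed multi-objective optimization would select; all names and defaults here are illustrative assumptions.

```python
# Illustrative sketch of a PINN composite loss:
#   L = w_data * L_data + w_pde * L_pde,
# where L_pde penalizes the residual of u''(x) = f(x) at collocation points.
import torch
import torch.nn as nn

def make_mlp(n_layers=3, n_neurons=32, activation=nn.Tanh):
    # Hypothetical architecture choices; in the paper these are the
    # quantities tuned by the multi-objective optimization.
    layers, in_dim = [], 1
    for _ in range(n_layers):
        layers += [nn.Linear(in_dim, n_neurons), activation()]
        in_dim = n_neurons
    layers.append(nn.Linear(in_dim, 1))
    return nn.Sequential(*layers)

def pinn_loss(model, x_data, u_data, x_coll, f, w_data=1.0, w_pde=1.0):
    # Data term: mismatch with observed / boundary values.
    loss_data = torch.mean((model(x_data) - u_data) ** 2)

    # Physics term: residual of u''(x) - f(x) via automatic differentiation.
    x = x_coll.clone().requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    loss_pde = torch.mean((d2u - f(x)) ** 2)

    # Weighted sum; the balance between the two terms is what the
    # abstract proposes to set automatically rather than by hand.
    return w_data * loss_data + w_pde * loss_pde
```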
