Abstract
Deep neural networks (DNNs) have recently shown great potential in solving partial differential equations (PDEs). The success of neural network-based surrogate models is attributed to their ability to learn a rich set of solution-related features. However, training DNNs usually requires many iterations to converge and a large amount of training data, which hinders the application of these models to complex physical contexts. To address this problem, we propose to apply transfer learning to DNN-based PDE solving tasks. In our work, we create pairs of transfer experiments on the Helmholtz and Navier-Stokes equations by constructing subtasks with different source terms and Reynolds numbers. We also conduct a series of experiments to investigate the degree of generality of the features across different equations. Our results demonstrate that, despite differences in the underlying PDE systems, the transfer methodology can significantly improve the accuracy of the predicted solutions, achieving a maximum performance boost of 97.3% on widely used surrogate models.
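As a rough illustration of the transfer methodology, the sketch below pretrains a surrogate on one subtask and then fine-tunes it on another with the early layers frozen. This is a minimal sketch assuming a PyTorch MLP surrogate; the architecture, the feature/head split, and the synthetic data are hypothetical placeholders, not the paper's actual setup.

```python
import torch
import torch.nn as nn

# Hypothetical fully connected surrogate; the paper's actual architecture
# and layer split are not specified here.
class Surrogate(nn.Module):
    def __init__(self, in_dim=2, hidden=128, out_dim=1):
        super().__init__()
        self.features = nn.Sequential(            # earlier, more general layers
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.head = nn.Sequential(                # later, task-specific layers
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.head(self.features(x))

# Step 1: pretrain on the source subtask (e.g., one source term or one
# Reynolds number); the source training loop is elided for brevity.
source_model = Surrogate()

# Step 2: transfer the learned weights to the target subtask, freeze the
# feature layers, and fine-tune only the head on the smaller target dataset.
target_model = Surrogate()
target_model.load_state_dict(source_model.state_dict())
for p in target_model.features.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in target_model.parameters() if p.requires_grad), lr=1e-3
)
loss_fn = nn.MSELoss()

# Synthetic target-task batch, purely for illustration.
x, y = torch.rand(64, 2), torch.rand(64, 1)
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(target_model(x), y)
    loss.backward()
    optimizer.step()
```

Freezing the transferred layers is one common choice; fine-tuning all layers with a reduced learning rate is an equally standard alternative when the target dataset is not too small.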
Highlights
Neural networks (NNs) have been extensively studied as surrogate models in the field of physics simulation for many years [1, 2]
We investigate the generality versus specificity of the neurons in each layer of DNN-based surrogate models
We evaluate the performance of transfer learning on neural network-based partial differential equation (PDE) solving tasks
Summary
Neural networks (NNs) have been extensively studied as surrogate models in the field of physics simulation for many years [1, 2]. Yosinski et al. [14] experimentally studied the generality versus specificity of the features learned in each network layer across different training tasks.
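To make the layer-wise generality study concrete, the following is a minimal sketch of a Yosinski-style transferability experiment, reusing the same kind of hypothetical PyTorch surrogate as above; the cut points, data, and training schedule are illustrative assumptions, not the paper's protocol.

```python
import torch
import torch.nn as nn

# Hypothetical surrogate as a flat stack of layers so we can cut at any depth.
def make_surrogate(in_dim=2, hidden=128, out_dim=1):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.Tanh(),
        nn.Linear(hidden, hidden), nn.Tanh(),
        nn.Linear(hidden, hidden), nn.Tanh(),
        nn.Linear(hidden, out_dim),
    )

def finetune(model, x, y, steps=200, lr=1e-3):
    opt = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=lr
    )
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

# Pretrained source model and synthetic target-task data, for illustration.
source = make_surrogate()
x_t, y_t = torch.rand(256, 2), torch.rand(256, 1)

# Transfer the first n layers frozen, retrain the rest, and record the error;
# a sharp error increase at some depth suggests the layers beyond it have
# become specific to the source task.
for n in range(0, len(source), 2):          # cut after each Linear+Tanh pair
    target = make_surrogate()
    target.load_state_dict(source.state_dict())
    for i, layer in enumerate(target):
        for p in layer.parameters():
            p.requires_grad = i >= n        # freeze the transferred layers
    err = finetune(target, x_t, y_t)
    print(f"transferred layers: {n}, final MSE: {err:.4f}")
```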