Physics-informed neural networks (PINNs) have recently become a viable modelling method for the scientific machine-learning community. The appeal of this network architecture lies in the coupling of a deep neural network (DNN) with partial differential equations (PDEs): the DNN can be considered a universal function approximator, and the physical knowledge from the embedded PDEs regularises the network during training. This regularisation improves the robustness of the network and means it requires only sparse training data in the domain. We apply PINNs with embedded Reynolds-averaged Navier–Stokes (RANS) equations to a spatially developing adverse-pressure-gradient (APG) boundary layer at 1000 < Reθ < 3000, and also to the periodic-hill problem at 5600 ≤ Reb ≤ 37,000. We do not use a turbulence model to close the RANS equations; rather, the network infers the Reynolds-stress fields during training as a means of satisfying closure. As PINNs are able to produce robust predictions with sparse training data, this paper demonstrates how PINNs can be trained with data typical of experimental campaigns. For the APG boundary layer, we seek to quantify the amount and location of data required to model a time-averaged turbulent flow with PINNs, and show that PINNs can accurately model wall shear stress and wall pressure. We demonstrate how PINNs perform when trained without information about a critical feature of the flow (such as a separation bubble), and we highlight that the cost of training the network does not increase with Reynolds number. Finally, we undertake a series of benchmarks to demonstrate how the method scales with network size and the number of residual points in the problem. This work seeks to establish the quantity and type of data required to successfully apply PINNs to time-averaged flow without a closure model, and to set expectations on the prediction quality of the network when trained on an available data set.
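To make the coupling described above concrete, the following is a minimal sketch of a PINN for the 2-D steady incompressible RANS equations without a closure model, where the network itself outputs the Reynolds-stress components. This is an illustrative assumption of how such a setup could look (here in JAX, with an assumed fully connected network mapping (x, y) to (u, v, p, uu, uv, vv) and an assumed sum of residual and data losses); it is not the authors' implementation.

```python
# Hedged sketch: PINN for 2-D steady incompressible RANS without a turbulence model.
# The network infers the Reynolds stresses (uu, uv, vv) alongside (u, v, p), and the
# loss combines PDE residuals at collocation points with a sparse-data misfit.
# Architecture, variable names and loss weighting are assumptions for illustration.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    """Initialise a fully connected network with Xavier-like scaling."""
    params = []
    keys = jax.random.split(key, len(sizes) - 1)
    for k, (m, n) in zip(keys, zip(sizes[:-1], sizes[1:])):
        w = jax.random.normal(k, (m, n)) * jnp.sqrt(2.0 / (m + n))
        params.append((w, jnp.zeros(n)))
    return params

def mlp(params, xy):
    """Map a point (x, y) to (u, v, p, uu, uv, vv); p is kinematic pressure."""
    h = xy
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return h @ w + b

def rans_residuals(params, xy, nu):
    """Continuity and momentum residuals of the steady 2-D RANS equations at one point."""
    fields = lambda z: mlp(params, z)
    out = fields(xy)
    u, v = out[0], out[1]
    jac = jax.jacfwd(fields)(xy)                # shape (6, 2): d(outputs)/d(x, y)
    hess = jax.jacfwd(jax.jacfwd(fields))(xy)   # shape (6, 2, 2): second derivatives
    u_x, u_y = jac[0, 0], jac[0, 1]
    v_x, v_y = jac[1, 0], jac[1, 1]
    p_x, p_y = jac[2, 0], jac[2, 1]
    uu_x, uv_x, vv_x = jac[3, 0], jac[4, 0], jac[5, 0]
    uv_y, vv_y = jac[4, 1], jac[5, 1]
    u_xx, u_yy = hess[0, 0, 0], hess[0, 1, 1]
    v_xx, v_yy = hess[1, 0, 0], hess[1, 1, 1]
    cont = u_x + v_y
    mom_x = u * u_x + v * u_y + p_x - nu * (u_xx + u_yy) + uu_x + uv_y
    mom_y = u * v_x + v * v_y + p_y - nu * (v_xx + v_yy) + uv_x + vv_y
    return jnp.array([cont, mom_x, mom_y])

def loss(params, xy_res, xy_data, uvp_data, nu):
    """Physics loss on collocation (residual) points + data loss on sparse measurements."""
    res = jax.vmap(lambda z: rans_residuals(params, z, nu))(xy_res)
    pred = jax.vmap(lambda z: mlp(params, z)[:3])(xy_data)  # compare only (u, v, p)
    return jnp.mean(res**2) + jnp.mean((pred - uvp_data)**2)
```

In this kind of setup the Reynolds-stress outputs are constrained only through the momentum residuals, which is how closure can be satisfied without an explicit turbulence model; the sparse-data term is where measurements typical of an experimental campaign would enter.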