Abstract

This study presents a multi-agent deep reinforcement learning (DRL)-based model for reconstructing flow fields from noisy data. Reinforcement learning with pixel-wise rewards is combined with physical constraints, represented by the momentum equation and the pressure Poisson equation, and with the known boundary conditions to build a physics-constrained deep reinforcement learning (PCDRL) model that can be trained without target training data. In the PCDRL model, each agent corresponds to a point in the flow field and learns an optimal strategy for choosing among pre-defined actions; the resulting action map can be visualised, which makes the operation of the model straightforward to interpret. The performance of the model is tested on synthetic noisy data generated from direct numerical simulation and on experimental data obtained by particle image velocimetry. Qualitative and quantitative results show that the model reconstructs the flow fields and reproduces their statistics and spectral content with commendable accuracy. Furthermore, the dominant coherent structures of the flow fields can be recovered from the reconstructed fields when they are analysed using proper orthogonal decomposition and dynamic mode decomposition. This study demonstrates that combining DRL-based models with the known physics of the flow can potentially help solve complex flow reconstruction problems, leading to a remarkable reduction in experimental and computational costs.
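The core idea of the abstract can be illustrated with a minimal sketch: one agent per grid point chooses from a small set of pre-defined actions, and the reward is a pixel-wise physics residual rather than a comparison against target data. The sketch below is not the paper's implementation; the action set, the grid size, and the use of the continuity residual as a stand-in for the momentum and pressure Poisson constraints are all illustrative assumptions, and the greedy per-pixel update replaces the learned DRL policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-defined actions: small additive corrections to the
# local u-velocity (the paper's actual action set is not specified here).
ACTIONS = np.array([-0.1, -0.05, 0.0, 0.05, 0.1])

def divergence(u, v):
    """Central-difference divergence on a uniform unit grid (interior points)."""
    du_dx = (u[1:-1, 2:] - u[1:-1, :-2]) / 2.0
    dv_dy = (v[2:, 1:-1] - v[:-2, 1:-1]) / 2.0
    return du_dx + dv_dy

def pixelwise_reward(u, v):
    """Physics-based pixel-wise reward: each interior agent is rewarded for
    a small local continuity residual. This is a simplified stand-in for the
    momentum-equation and pressure-Poisson constraints used in the paper;
    no target flow field is needed to evaluate it."""
    return -np.abs(divergence(u, v))

# Synthetic noisy field: a uniform flow corrupted with Gaussian noise.
u = np.ones((16, 16)) + 0.2 * rng.standard_normal((16, 16))
v = 0.2 * rng.standard_normal((16, 16))

# One greedy policy-improvement step: each agent tries every action and keeps
# the value that maximised its own local reward. Evaluating each action
# independently per pixel is a simplification of the multi-agent training.
best = pixelwise_reward(u, v).copy()
u_new = u.copy()
for a in ACTIONS:
    u_try = u.copy()
    u_try[1:-1, 1:-1] += a
    r = pixelwise_reward(u_try, v)
    improved = r > best
    u_new[1:-1, 1:-1] = np.where(improved, u_try[1:-1, 1:-1],
                                 u_new[1:-1, 1:-1])
    best = np.where(improved, r, best)

# The elementwise-best reward can never be worse than the initial one.
print(best.mean() >= pixelwise_reward(u, v).mean())
```

Because each agent's chosen action is a discrete index, the per-pixel choices form exactly the kind of action map the abstract describes as visualisable and interpretable.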
