Abstract

This paper presents a new nonlinear non-intrusive reduced-order model (NL-NIROM) that outperforms the traditional proper orthogonal decomposition (POD)-based reduced-order model (ROM). This improvement is achieved through auto-encoder (AE) and self-attention-based deep learning methods. The novelty of this work is that it uses a stacked auto-encoder (SAE) network to project the original high-dimensional dynamical system onto a low-dimensional nonlinear subspace and predicts the fluid dynamics using a self-attention-based deep learning method. This paper introduces a new model-reduction neural network architecture for fluid flow problems, as well as a linear non-intrusive reduced-order model (L-NIROM) based on POD and the self-attention mechanism. In the NL-NIROM, the SAE network compresses high-dimensional physical information into much smaller representations in a reduced latent space; these representations are expressed as a number of codes in the middle layer of the SAE network. The codes at different time levels are then used to train a set of hyper-surfaces via self-attention-based deep learning: the inputs of the self-attention network are the codes at previous time levels, and its outputs are the codes at the current time level. The codes at the current time level are then projected back to the original full space by the decoder layers of the SAE network. The capability of the new model, NL-NIROM, is demonstrated through two test cases: flow past a cylinder and a lock exchange. The results show that the NL-NIROM is more accurate than the popular model reduction method, namely the POD-based L-NIROM.
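The encode-attend-decode pipeline described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustrative sketch only, not the authors' implementation: the dimensions, the single-hidden-layer encoder/decoder, the single attention head, and all weights are hypothetical (random and untrained), whereas the paper's SAE and self-attention networks are trained on flow snapshot data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical sizes: full-space dimension, latent code dimension, history length
n_full, n_code, n_hist = 1000, 8, 5

# --- Auto-encoder (one layer each way shown for brevity; the paper uses a stacked AE) ---
W_enc = rng.standard_normal((n_code, n_full)) * 0.01
W_dec = rng.standard_normal((n_full, n_code)) * 0.01

def encode(u):
    """Compress a full-space state vector into a latent code."""
    return np.tanh(W_enc @ u)

def decode(c):
    """Project a latent code back to the full space (decoder layers)."""
    return W_dec @ c

# --- Single-head self-attention over the code history ---
W_q = rng.standard_normal((n_code, n_code))
W_k = rng.standard_normal((n_code, n_code))
W_v = rng.standard_normal((n_code, n_code))

def attend(codes):
    """Predict the next-time-level code from codes at previous time levels.

    codes: array of shape (n_hist, n_code), one row per past time level.
    """
    Q, K, V = codes @ W_q.T, codes @ W_k.T, codes @ W_v.T
    scores = Q @ K.T / np.sqrt(n_code)            # scaled dot-product attention
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True) # row-wise softmax
    return (weights @ V)[-1]                      # last position's output = prediction

# One prediction step: encode the history, attend, decode back to full space
history = np.stack([encode(rng.standard_normal(n_full)) for _ in range(n_hist)])
next_code = attend(history)
next_state = decode(next_code)
```

In a trained model this step would be applied auto-regressively: the predicted code is appended to the history and the window advances one time level per prediction.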
