Abstract

Aiming at fast and accurate spatiotemporal prediction of interfacial flow fields, a novel deep learning model combining a Convolutional AutoEncoder (CAE) and a long short-term memory network with spatial and temporal attention (LSTM-STA) is proposed in this article and named CAE-LSTM-STA (a hybrid model of CAE and LSTM-STA). To enable fast computation, the encoder of the CAE first compresses the high-dimensional snapshots of the flow fields into a low-dimensional latent space. The latent variables then serve as the input features for the LSTM-STA, which evolves the state of the latent space in time. Finally, the latent variables at future time steps are fed into the decoder of the CAE to recover the full-order snapshots of the flow fields. The prediction performance of the proposed model is evaluated on two representative benchmark cases, a dam break and a rising bubble. The CAE is found to reduce dimensionality far more effectively than the widely used Proper Orthogonal Decomposition, while the LSTM-STA is observed to outperform the original LSTM in multivariate temporal prediction. These promising results indicate that the proposed CAE-LSTM-STA model effectively captures and advances the spatiotemporal characteristics of interfacial flow fields, making it an advanced surrogate model for fast and accurate generation of temporally continuous interfacial flow fields.
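The abstract describes a two-stage surrogate: an encoder compresses each snapshot to a latent vector, a recurrent model advances that vector in time, and the decoder recovers the full-order field. The PyTorch sketch below illustrates this pipeline under assumed settings only (64x64 single-channel snapshots, a 64-dimensional latent space, and a plain LSTM standing in for the paper's LSTM-STA, whose spatial and temporal attention mechanisms are not reproduced); all layer sizes and names are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class CAE(nn.Module):
    """Convolutional autoencoder: compresses 2-D flow-field snapshots
    into a low-dimensional latent vector and reconstructs them."""
    def __init__(self, latent_dim=64):
        super().__init__()
        # Encoder: 64x64 single-channel snapshot -> latent vector
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),
        )
        # Decoder: latent vector -> reconstructed full-order snapshot
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),              # 32 -> 64
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class LatentLSTM(nn.Module):
    """Plain LSTM that advances the latent state in time; the paper's
    LSTM-STA additionally applies spatial and temporal attention,
    which is omitted in this sketch."""
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, latent_dim)

    def forward(self, z_seq):
        out, _ = self.lstm(z_seq)     # (batch, time, hidden)
        return self.head(out[:, -1])  # latent state at the next time step

# Inference stage: encode a history of snapshots, advance the latent state,
# then decode the predicted latent vector back to a full-order field.
cae, lat = CAE(), LatentLSTM()
snapshots = torch.randn(20, 1, 64, 64)             # 20 consecutive snapshots (toy data)
with torch.no_grad():
    z_hist = cae.encoder(snapshots).unsqueeze(0)   # (1, 20, latent_dim)
    z_next = lat(z_hist)                           # predicted latent state
    field_next = cae.decoder(z_next)               # predicted full-order snapshot
print(field_next.shape)                            # torch.Size([1, 1, 64, 64])
```

In practice the CAE is trained first on reconstruction loss, the latent sequence model is trained on the encoded snapshot history, and future fields are generated by rolling the latent prediction forward autoregressively before decoding.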
