Abstract

This paper investigates the invariance and consistency of deep learning models trained on turbulent pressure fluctuations. A long short-term memory (LSTM) model is employed to predict wall pressure fluctuations across physical regimes featuring turbulence, shock–boundary layer interaction, and separation. The model's sensitivity to its inputs is examined using different input data sets. Training the deep learning model on raw signals from different flow regions leads to large inaccuracies. It is shown that the data must be appropriately pre-processed before training for the deep learning predictions to become consistent. When the mean is removed and the normalized fluctuating component of the signal is used, the deep learning predictions not only improve greatly in accuracy but, most importantly, converge and become consistent, provided that the signal sparsity remains within the inertial sub-range of the turbulence energy spectrum. The power spectra of the surface pressure fluctuations reveal that the model is highly accurate up to a certain frequency for the fully turbulent flow. The model's consistency is evidenced by its transferability across the various probe positions on the wall, despite the significant differences in the turbulent flow properties in the training data set, i.e., signals obtained before, after, and inside the shock–boundary layer interaction regions. The model's prediction consistency and invariance to the location(s) of the turbulent training signals are promising for applying deep learning models to a variety of turbulent flows.
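The abstract's central methodological point is that the wall-pressure signal should be mean-removed and normalized before being fed to the LSTM. The following is a minimal sketch, not the authors' implementation, of that pre-processing and a single-layer LSTM predictor; all function, class, and variable names (e.g., normalize_fluctuations, PressureLSTM) are hypothetical, and the synthetic signal merely stands in for a wall-pressure probe record.

```python
# Minimal sketch (assumed, not from the paper): normalize the fluctuating
# component of a wall-pressure signal and frame it for an LSTM predictor.
import numpy as np
import torch
import torch.nn as nn


def normalize_fluctuations(p: np.ndarray) -> np.ndarray:
    """Return the mean-removed, std-normalized fluctuating component p'."""
    p_prime = p - p.mean()
    return p_prime / p_prime.std()


def make_windows(signal: np.ndarray, seq_len: int):
    """Frame a 1-D signal into (input sequence, next sample) training pairs."""
    xs, ys = [], []
    for i in range(len(signal) - seq_len):
        xs.append(signal[i:i + seq_len])
        ys.append(signal[i + seq_len])
    x = torch.tensor(np.stack(xs), dtype=torch.float32).unsqueeze(-1)   # (N, seq_len, 1)
    y = torch.tensor(np.array(ys), dtype=torch.float32).unsqueeze(-1)   # (N, 1)
    return x, y


class PressureLSTM(nn.Module):
    """Single-layer LSTM that predicts the next normalized pressure sample."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)            # out: (N, seq_len, hidden)
        return self.head(out[:, -1, :])  # predict from the last time step


# Illustrative usage with a synthetic stand-in for a wall-pressure probe signal.
p_raw = np.random.randn(10_000).cumsum()
x, y = make_windows(normalize_fluctuations(p_raw), seq_len=128)
model = PressureLSTM()
loss = nn.MSELoss()(model(x[:256]), y[:256])  # one illustrative forward pass
```

The key design choice mirrored from the abstract is that training and prediction operate on the normalized fluctuation p' rather than the raw signal, which is what makes models trained at different probe locations comparable.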
