Abstract

Model-consistent training has become a popular approach for data-driven turbulence modeling because it can improve model generalizability and reduce data requirements by involving the Reynolds-averaged Navier–Stokes (RANS) equations during model learning. Neural networks are often used to represent the Reynolds stress owing to their expressive power, but they lack interpretability regarding the causal relationship between model inputs and outputs. Post hoc methods have been used to explain neural networks by indicating input feature importance. For model-consistent training, however, model explainability requires analyzing both the network inputs and outputs; that is, the effects of the model output on the RANS predictions should be explained in addition to the input features. In this work, we investigate the explainability of a model-consistent learned model for the internal flow prediction of NASA Rotor 37 at its peak-efficiency operating condition. Neural-network-based corrections to the Spalart–Allmaras turbulence model are learned from various experimental data with the ensemble Kalman method. The learned model noticeably improves the velocity prediction near the shroud. The explainability of the trained network is analyzed in terms of the model correction and the input feature importance. Specifically, the learned correction increases the local turbulence production in the vortex breakdown region due to non-equilibrium effects, thereby capturing the blockage effects near the shroud. Moreover, based on the Shapley additive explanations (SHAP) method, the ratio of production to destruction and the helicity are shown to have relatively high importance for accurately predicting the compressor rotor flow.
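For readers unfamiliar with the training procedure summarized above, the following sketch illustrates, in broad strokes, how a single ensemble Kalman update can adjust neural-network weights against observations. It is a minimal illustration under assumed array shapes and a hypothetical rans_observe forward map (standing in for a RANS solve with the network correction applied); it is not the solver coupling or implementation used in the paper.

    # Minimal sketch of one stochastic ensemble Kalman update for NN weights.
    # `rans_observe(w)` is a placeholder for running the RANS solver with the
    # neural-network correction parameterized by weight vector w and returning
    # the predicted observables (e.g. velocities at measurement stations).
    import numpy as np

    def enkf_update(W, d, R, rans_observe, rng):
        """One analysis step.

        W : (n_ens, n_weights) ensemble of network weight vectors
        d : (n_obs,)           experimental observations
        R : (n_obs, n_obs)     observation-error covariance
        """
        n_ens = W.shape[0]
        # Propagate each ensemble member through the (black-box) RANS solver.
        HW = np.array([rans_observe(w) for w in W])          # (n_ens, n_obs)

        dW = W - W.mean(axis=0)
        dH = HW - HW.mean(axis=0)

        # Sample covariances between weights and predicted observations.
        C_wh = dW.T @ dH / (n_ens - 1)                        # (n_weights, n_obs)
        C_hh = dH.T @ dH / (n_ens - 1)                        # (n_obs, n_obs)

        # Transposed Kalman gain: K = C_wh (C_hh + R)^-1, so K^T solves below.
        KT = np.linalg.solve(C_hh + R, C_wh.T)                # (n_obs, n_weights)

        # Perturbed observations, one noisy copy of d per ensemble member.
        D = d + rng.multivariate_normal(np.zeros(len(d)), R, size=n_ens)
        return W + (D - HW) @ KT

The appeal of this kind of update for model-consistent training is that it is derivative-free: the RANS solver is treated as a black box, so no adjoint or backpropagation through the CFD code is required.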
