Abstract

This work aims to make the prediction process of neural network-based turbulence models more transparent. Because of their black-box ingredients, such models’ predictions cannot be anticipated from the inputs alone. This paper is therefore concerned with quantifying each feature’s importance for the predictions of trained and fixed neural networks (NNs), which is one possible type of explanation for opaque models. Two conceptually different attribution methods, namely permutation feature importance and DeepSHAP, are chosen in order to assess global, regional and local feature importance. The neuralSST turbulence model, which serves as an example, is investigated in greater detail. While the global importance scores provide a quick and reliable way to detect irrelevant features and may thus be used for feature selection, only the (semi-)local analysis provides meaningful and trustworthy interpretations of the model. In fact, the local importance scores suggest that hypotheses with a common high-level influence on the turbulence model, e.g. adjusting the net production of turbulent kinetic energy or the Reynolds stress anisotropy, are similarly affected by local mean flow structures such as attached boundary layers, free shear layers or recirculation zones.
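
To illustrate the first of the two attribution methods, the following is a minimal, generic sketch of permutation feature importance for a trained and fixed regression model; it is not the paper’s implementation, and the `model`, `metric` and array names are placeholders chosen for illustration. The idea is simply to shuffle one input feature at a time and record how much the error metric degrades.

```python
import numpy as np

def permutation_feature_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Generic permutation feature importance sketch (assumed interfaces).

    model  : object with a predict(X) method (e.g. a trained NN wrapper)
    X, y   : feature matrix (n_samples, n_features) and targets
    metric : error metric, e.g. lambda y, p: np.mean((y - p) ** 2)
    Returns the mean increase in error per feature; larger values indicate
    features the fixed model relies on more strongly (global importance).
    """
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        errors = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # Break the association between feature j and the target
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            errors.append(metric(y, model.predict(X_perm)))
        importances[j] = np.mean(errors) - baseline
    return importances
```

As noted in the abstract, such global scores are mainly useful for detecting irrelevant features (near-zero importance) and for feature selection; local attributions such as DeepSHAP are needed for the flow-structure-level interpretation discussed above.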
