Building accurate and generalizable machine-learning models requires large training datasets. In aerodynamics, quantities of interest are typically governed by complex, non-linear mechanisms that neural networks are well suited to model. However, acquiring large, high-fidelity datasets from either simulations or experiments can be expensive. In this work, a transfer-learning framework is explored to reduce the reliance on these expensive datasets by exploiting cost-effective low-fidelity analyses, such as the inviscid panel method, to construct extensive datasets. Robust base networks are first developed from inviscid distributions; target networks then “learn” by transferring the relevant embedded features to facilitate the modelling of high-fidelity distributions, rather than relying solely on access to high-fidelity samples. Assessment of the framework reveals performance gains over conventional training schemes in (1) enhancing fidelity from inviscid to high-fidelity pressure distributions; (2) generalizing prior knowledge to learn adjacent skin-friction properties even without a low-fidelity equivalent; and (3) extrapolating to yet-to-be-seen operating conditions. Under conditions of limited high-fidelity samples, test MSE can be improved by up to 10², 10¹, and 10² for the three respective tasks. As such, these findings motivate further investigations to support data-scarce surrogate modelling in more empirical settings.
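The transfer mechanism described above can be sketched in miniature. The toy below is an illustration only, not the paper's actual networks or data: two hypothetical one-dimensional functions stand in for the low-fidelity (inviscid-like) and high-fidelity distributions, a small NumPy MLP is pretrained on abundant low-fidelity samples, and its embedding layer is then frozen while only the head is retuned on a handful of high-fidelity samples. All function names, layer sizes, and hyperparameters are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two fidelities: a cheap low-fidelity trend
# and a correlated high-fidelity target that shares its underlying shape.
def low_fidelity(x):
    return np.sin(3 * x)

def high_fidelity(x):
    return np.sin(3 * x) + 0.3 * x**2

def init_layer(n_in, n_out):
    return rng.normal(0, 1 / np.sqrt(n_in), (n_in, n_out)), np.zeros(n_out)

def forward(x, W1, b1, W2, b2):
    h = np.tanh(x @ W1 + b1)      # shared feature embedding (base network)
    return h, h @ W2 + b2         # linear head on top of the embedding

def train(x, y, W1, b1, W2, b2, lr=0.05, steps=2000, freeze_base=False):
    """Gradient descent on MSE; optionally freeze the base (embedding) layer."""
    for _ in range(steps):
        h, pred = forward(x, W1, b1, W2, b2)
        err = (pred - y) / len(x)             # dMSE/dpred (up to a factor of 2)
        W2 -= lr * h.T @ err
        b2 -= lr * err.sum(0)
        if not freeze_base:
            dh = err @ W2.T * (1 - h**2)      # backprop through tanh
            W1 -= lr * x.T @ dh
            b1 -= lr * dh.sum(0)
    return W1, b1, W2, b2

# Pretrain the base network on abundant low-fidelity samples.
x_lo = rng.uniform(-1, 1, (200, 1)); y_lo = low_fidelity(x_lo)
W1, b1, W2, b2 = *init_layer(1, 16), *init_layer(16, 1)
W1, b1, W2, b2 = train(x_lo, y_lo, W1, b1, W2, b2)

# Scarce high-fidelity samples: transfer the embedded features (frozen base,
# retrained head) versus training an identical network from scratch.
x_hi = rng.uniform(-1, 1, (10, 1)); y_hi = high_fidelity(x_hi)
Wt1, bt1, Wt2, bt2 = train(x_hi, y_hi, W1.copy(), b1.copy(),
                           W2.copy(), b2.copy(), freeze_base=True)
Ws1, bs1, Ws2, bs2 = *init_layer(1, 16), *init_layer(16, 1)
Ws1, bs1, Ws2, bs2 = train(x_hi, y_hi, Ws1, bs1, Ws2, bs2)

x_test = np.linspace(-1, 1, 100).reshape(-1, 1)
mse = lambda p: np.mean((p - high_fidelity(x_test))**2)
mse_transfer = mse(forward(x_test, Wt1, bt1, Wt2, bt2)[1])
mse_scratch = mse(forward(x_test, Ws1, bs1, Ws2, bs2)[1])
```

With few high-fidelity samples, the transferred network reuses features learned from the cheap data and typically needs to fit only the residual between fidelities, which is the intuition behind the reported MSE improvements.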