Abstract

In this paper, we introduce a data-driven machine learning framework for improving the accuracy of wind plant flow models by learning turbulence model corrections based on data from higher-fidelity simulations. First, a high-dimensional PDE-constrained optimization problem is solved using gradient-based optimization with adjoints to determine optimal eddy viscosity fields that improve the agreement of a medium-fidelity Reynolds-Averaged Navier-Stokes (RANS) model with large eddy simulations (LES). A supervised learning problem is then constructed to find general, predictive representations of the optimal turbulence closure. A Gaussian process regression model is trained to predict the eddy viscosity field from local RANS flow field information such as velocities, pressures, and their gradients. The Gaussian process is trained on LES of a single turbine and implemented in a wind plant simulation with 36 turbines. We show improvement over the baseline RANS model with the machine learning correction, and demonstrate that the framework provides accurate confidence levels for the corrections, enabling future uncertainty quantification studies.
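
As a concrete illustration of the regression step described above, the following minimal sketch uses scikit-learn's GaussianProcessRegressor to map local flow features to an eddy-viscosity value together with a predictive standard deviation. The kernel choice, array shapes, and randomly generated stand-in data are assumptions for illustration only and do not reflect the paper's actual implementation.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Stand-in data: in the paper, the inputs are local RANS quantities
    # (velocities, pressures, and their gradients) and the targets are the
    # optimal eddy-viscosity values from the adjoint-based optimization.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 6))   # placeholder local-flow features
    y_train = rng.normal(size=200)        # placeholder optimal nu_t values

    # Anisotropic RBF plus white noise is an assumed kernel, not the paper's.
    kernel = RBF(length_scale=np.ones(X_train.shape[1])) + WhiteKernel(noise_level=1e-3)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X_train, y_train)

    # Predict the eddy viscosity at new RANS points; the returned standard
    # deviation plays the role of the confidence level mentioned above.
    X_new = rng.normal(size=(50, 6))
    nu_t_mean, nu_t_std = gp.predict(X_new, return_std=True)

The predictive standard deviation is what makes downstream uncertainty quantification possible: points whose local flow features lie far from the training data receive correspondingly wide confidence intervals.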

Highlights

  • Effective wind plant flow modeling requires balancing competing objectives in terms of physical fidelity and computational cost

  • We develop a data-driven machine learning framework to improve the performance of WindSE [6], a medium-fidelity steady-state Reynolds-Averaged Navier-Stokes (RANS) model with 2D and 3D capabilities and non-rotating actuator disk turbine representations

  • This eddy viscosity should produce the best agreement with large eddy simulations (LES) regardless of which RANS model produces it, whether a mixing length model, a k-ε model, or a Gaussian process machine learning model (a schematic formulation of this agreement criterion follows this list)
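
One way to make "best agreement" precise, in notation assumed here rather than taken from the paper, is to define the optimal eddy viscosity field as the minimizer of the mismatch between the RANS solution and LES data, with the steady RANS equations as a constraint:

    \min_{\nu_t} \; \frac{1}{2} \int_{\Omega} \left\| \mathbf{u}_{\mathrm{RANS}}(\nu_t) - \mathbf{u}_{\mathrm{LES}} \right\|^2 \, d\Omega
    \quad \text{subject to} \quad \mathcal{R}\!\left(\mathbf{u}_{\mathrm{RANS}}, p; \nu_t\right) = 0,

where R denotes the steady RANS equations. The gradient of the objective with respect to the eddy viscosity field is obtained from the adjoint of the RANS equations, which is what makes gradient-based optimization over a high-dimensional field tractable.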

Introduction

Effective wind plant flow modeling requires balancing competing objectives in terms of physical fidelity and computational cost. Our approach solves a high-dimensional optimization problem to find optimal corrections to turbulence closure models that produce better agreement with data from high-fidelity simulations. A nonparametric machine learning model, using Gaussian process regression, is trained to predict the optimal corrections.
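
To give a feel for the adjoint-driven optimization step, the sketch below runs gradient descent on a discretized eddy-viscosity field. The forward and adjoint solves are replaced by a toy linear map so that the script is self-contained and runnable; in the actual study these roles are played by the RANS solver and its adjoint, and every name and number here is an illustrative placeholder.

    import numpy as np

    # Toy stand-ins for the forward RANS solve and its adjoint: a fixed linear
    # map keeps the example self-contained. Nothing here comes from the
    # paper's solver.
    rng = np.random.default_rng(1)
    A = np.eye(40) + 0.1 * rng.normal(size=(40, 40))  # placeholder discrete operator
    u_les = rng.normal(size=40)                       # placeholder LES reference data

    def solve_rans(nu_t):
        """Placeholder forward solve: velocity as a function of the nu_t field."""
        return A @ nu_t

    def solve_adjoint(u_rans):
        """Placeholder adjoint: gradient of 0.5 * ||u_rans - u_les||^2 w.r.t. nu_t."""
        return A.T @ (u_rans - u_les)

    # Steepest-descent update of the discretized eddy-viscosity field.
    nu_t = np.zeros(40)
    for _ in range(200):
        grad = solve_adjoint(solve_rans(nu_t))
        nu_t -= 1e-2 * grad

    misfit = 0.5 * np.sum((solve_rans(nu_t) - u_les) ** 2)

In the full problem the descent direction comes from a single adjoint solve per iteration, at a cost independent of the number of degrees of freedom in the eddy viscosity field, which is what makes the high-dimensional optimization affordable.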
