Abstract

The reliability of numerical simulations of turbulence depends on our ability to quantify and control discretization errors. The classical literature on error analysis typically studies simple linear equations. Error estimates derived from such analyses rest on the assumption that each dependent variable can be characterized by a unique amplitude and scale of spatial variation that can be normalized to unity. This assumption is not valid for strongly nonlinear problems, such as turbulence, where nonlinear interactions rapidly redistribute energy, producing a broad, continuous spectrum of amplitudes. In such situations, the numerical error as well as the subgrid model can change with grid spacing in a complicated manner that cannot be inferred from the results of classical error analysis. In this paper, a formalism for analyzing errors in such nonlinear problems is developed in the context of finite difference approximations to the Navier–Stokes equations when the flow is fully turbulent. Analytical expressions for the power spectra of these errors are derived by exploiting the joint-normal approximation for turbulent velocity fields. These results are applied to large-eddy simulation of turbulence to obtain quantitative bounds on the magnitude of numerical errors. The significance of these errors is assessed by comparing their magnitudes with those of the nonlinear and subgrid terms. One method of controlling the errors is suggested and its effectiveness is evaluated through quantitative measures. Although explicit evaluations are presented only for large-eddy simulation, the expressions derived for the power spectra of errors are applicable to direct numerical simulation as well.
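
As a hedged illustration of the kind of spectral error estimate described above, the following one-dimensional sketch (an assumption for illustration, not the paper's three-dimensional derivation) synthesizes a Gaussian random field with a prescribed model spectrum, consistent in spirit with the joint-normal approximation, and checks that the power spectrum of the second-order central-difference derivative error is (k'(k) − k)² times the field's spectrum, where k'(k) = sin(kh)/h is that scheme's modified wavenumber. The grid size, model spectrum, and choice of scheme are all assumptions made here for the sketch.

```python
import numpy as np

# Hedged 1-D sketch (model spectrum and scheme chosen for illustration only):
# the central difference acts in Fourier space through the modified wavenumber
# k'(k) = sin(k h)/h, so its derivative-error spectrum is (k'(k) - k)^2 E(k).

N = 4096                                   # grid points (assumed)
L = 2.0 * np.pi                            # periodic domain length
h = L / N                                  # grid spacing
k = 2.0 * np.pi * np.fft.rfftfreq(N, d=h)  # angular wavenumbers 0..N/2

# Assumed model energy spectrum peaked near k0 (illustrative, not from the paper)
k0 = 20.0
E = np.where(k > 0.0, k**4 * np.exp(-2.0 * (k / k0) ** 2), 0.0)

# Gaussian (joint-normal) field with spectrum E and random phases
rng = np.random.default_rng(0)
u_hat = np.sqrt(E) * np.exp(2j * np.pi * rng.random(k.size))
u_hat[-1] = 0.0                            # drop the Nyquist mode
u = np.fft.irfft(u_hat, n=N)

# Exact (spectral) derivative vs. second-order central difference
dudx_exact = np.fft.irfft(1j * k * u_hat, n=N)
dudx_fd = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * h)

# Measured error spectrum and the modified-wavenumber prediction
E_err = np.abs(np.fft.rfft(dudx_fd - dudx_exact)) ** 2
k_mod = np.sin(k * h) / h                  # modified wavenumber of the scheme
E_pred = (k_mod - k) ** 2 * np.abs(u_hat) ** 2

print("max deviation from prediction:", np.max(np.abs(E_err - E_pred)))
```

Running this prints a deviation at round-off level: for a single linear derivative the error spectrum follows directly from the modified wavenumber. The analysis summarized in the abstract goes further, deriving error spectra for the nonlinear terms, where the joint-normal approximation for the turbulent velocity field enters.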
