Abstract

The evaluation of models in general is a nontrivial task and can, for epistemological and practical reasons, never be considered complete. Due to this incompleteness, a model may yield correct results for the wrong reasons, i.e., via a different chain of processes than found in observations. While guidelines and strategies exist in the atmospheric sciences to maximize the chances that models are correct for the right reasons, these are mostly applicable to full physics models, such as numerical weather prediction models. The Intermediate Complexity Atmospheric Research (ICAR) model is an atmospheric model employing linear mountain wave theory to represent the wind field. In this wind field, atmospheric quantities such as temperature and moisture are advected, and a microphysics scheme is applied to represent the formation of clouds and precipitation. This study conducts an in-depth process-based evaluation of ICAR, employing idealized simulations to increase the understanding of the model and to develop recommendations that maximize the probability that its results are correct for the right reasons. To contrast the results obtained from the linear-theory-based ICAR model with those of a full physics model, idealized simulations with the Weather Research and Forecasting (WRF) model are conducted. The impact of the developed recommendations is then demonstrated with a case study for the South Island of New Zealand. The results of this investigation suggest three modifications to improve different aspects of ICAR simulations. The representation of the wind field within the domain improves when the dry and moist Brunt–Väisälä frequencies are calculated, in accordance with linear mountain wave theory, from the unperturbed base state rather than from the time-dependent perturbed atmosphere.
Imposing boundary conditions at the upper boundary that differ from the standard zero-gradient boundary condition is shown to reduce errors in the potential temperature and water vapor fields. Furthermore, the results show that there is a minimum model top elevation below which the model top influences cloud and precipitation processes within the domain. The method to determine this minimum model top elevation is applied to both the idealized simulations and the real terrain case study. Notable differences between the ICAR and WRF simulations are observed across all investigated quantities, such as the wind field, the water vapor and hydrometeor distributions, and the distribution of precipitation. The case study indicates that the precipitation maximum calculated by the ICAR simulation employing the developed recommendations is shifted upwind in comparison to an unmodified version of ICAR. The shift is traced to influences of the model top on cloud formation and precipitation processes in the ICAR simulations. Furthermore, the results show that when model skill is evaluated with statistical metrics based on comparisons to surface observations only, such an analysis may not reflect the skill of the model in capturing atmospheric processes like gravity waves and cloud formation.
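The first recommendation, calculating the Brunt–Väisälä frequency from the unperturbed base state rather than from the perturbed atmosphere, can be sketched for the dry case with the standard relation N² = (g/θ) dθ/dz. The following is a minimal illustration only; the function name and the idealized base-state profile are hypothetical and not taken from the ICAR source code:

```python
import numpy as np

G = 9.81  # gravitational acceleration (m s^-2)

def dry_brunt_vaisala(theta, z):
    """Dry Brunt-Väisälä frequency N (s^-1) from a base-state
    potential temperature profile theta(z); N^2 = (g/theta) dtheta/dz."""
    dtheta_dz = np.gradient(theta, z)           # vertical gradient (K m^-1)
    n_squared = (G / theta) * dtheta_dz         # N^2 (s^-2)
    # Clip statically unstable layers (N^2 < 0) to zero before the root
    return np.sqrt(np.clip(n_squared, 0.0, None))

# Hypothetical stably stratified base state: theta increasing by 4 K/km
z = np.linspace(0.0, 10_000.0, 101)             # heights (m)
theta = 280.0 + 0.004 * z                       # potential temperature (K)
N = dry_brunt_vaisala(theta, z)
print(round(float(N[0]), 4))                    # ~0.0118 s^-1 near the surface
```

Because the base state is time-invariant, N computed this way stays fixed over the simulation, consistent with the linear mountain wave theory underlying ICAR's wind field.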

Highlights

  • All numerical models of natural systems are approximations to reality

  • This study aims to improve the understanding of the Intermediate Complexity Atmospheric Research (ICAR) model and to develop recommendations that maximize the probability that the results of ICAR simulations, such as the spatial distribution of precipitation, are caused by the physical processes modeled by ICAR rather than by numerical artifacts or influences of the model top

  • ICAR-N and ICAR-O simulations were run with a model top at z_top = 20.4 km and zero-gradient boundary conditions (BC code 000)


Introduction

All numerical models of natural systems are approximations to reality. They generate predictions that may further the understanding of natural processes and allow the model to be tested against measurements. A model prediction that disagrees with a measurement falsifies the model, thereby indicating, for instance, issues with the underlying assumptions. From a practical point of view, the incompleteness and scarcity of data, as well as the imperfections of observing systems, place further limits on the verifiability of models. The same limitations apply to model evaluation. Evaluation therefore focuses on establishing the reliability of a model rather than its truth.

