Abstract

Presently, there is no standardized framework or set of metrics for assessing regional climate model precipitation output. Because of this, it is difficult to compare model performance one-to-one across regions or studies, or against coarser-resolution global climate models. To address this, we introduce the first steps toward establishing a dynamic, yet standardized, benchmarking framework that can be used to assess model skill in simulating various characteristics of rainfall. Benchmarking differs from typical model evaluation in that it requires that performance expectations are set a priori. This framework has wide-ranging applications to underpin scientific studies that assess model performance, inform model development priorities, and aid stakeholder decision-making by providing a structured methodology to identify fit-for-purpose model simulations for climate risk assessments and adaptation strategies. While this framework can be applied to regional climate model simulations over any spatial domain, we demonstrate its effectiveness over Australia using high-resolution, 0.5° × 0.5° simulations from the CORDEX-Australasia ensemble. We provide recommendations for selecting metrics and pragmatic benchmarking thresholds depending on the application of the framework. This includes a top tier of minimum standard metrics that establishes a baseline benchmarking standard for ongoing climate model assessment. We present multiple applications of the framework, informed by feedback from potential user communities, and encourage the scientific and user community to build on this framework by tailoring benchmarks and incorporating additional metrics specific to their application.

Significance Statement

We introduce a standardized benchmarking framework for assessing the skill of regional climate models in simulating precipitation. This framework addresses the lack of a uniform approach in the scientific community and has diverse applications in scientific research, model development, and societal decision-making. We define a set of minimum standard metrics to underpin ongoing climate model assessments that quantify model skill in simulating fundamental characteristics of rainfall. We provide guidance for selecting metrics and defining benchmarking thresholds, demonstrated using multiple case studies over Australia. This framework has broad applications for numerous user communities and provides a structured methodology for the assessment of model performance.
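To make the benchmarking-versus-evaluation distinction concrete, the sketch below illustrates the core idea of setting a performance expectation a priori: a pass/fail threshold is fixed before any model output is inspected, and the simulation is then tested against it. The metric (domain-mean percentage bias of annual-mean rainfall) and the 20% threshold are illustrative placeholders, not the paper's actual minimum standard metrics or recommended thresholds.

```python
# Minimal sketch of a priori benchmarking, under assumed (hypothetical)
# metric and threshold choices. The key point: the benchmark is declared
# before looking at the model output, unlike open-ended evaluation.
import numpy as np

# A priori benchmark: annual-mean rainfall bias must stay within 20%
# (hypothetical threshold, for illustration only).
BIAS_THRESHOLD_PCT = 20.0

def percent_bias(model: np.ndarray, obs: np.ndarray) -> float:
    """Domain-mean percentage bias of simulated vs. observed rainfall."""
    return 100.0 * (model.mean() - obs.mean()) / obs.mean()

def meets_benchmark(model: np.ndarray, obs: np.ndarray) -> bool:
    """True if the simulation satisfies the pre-registered threshold."""
    return abs(percent_bias(model, obs)) <= BIAS_THRESHOLD_PCT

# Example: synthetic gridded annual-mean precipitation (mm/day).
rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=1.5, size=(40, 50))
model = obs * 1.10  # a simulation with a uniform 10% wet bias
print(meets_benchmark(model, obs))  # True: a 10% bias is within 20%
```

In a real application of the framework, `percent_bias` would be replaced by the chosen tier of metrics, each paired with its own pre-declared threshold, so that results remain comparable across regions, studies, and model ensembles.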
