Abstract
Abstract. The CFMIP Diagnostic Codes Catalogue assembles cloud metrics, diagnostics and methodologies, together with programs, written by various members of the CFMIP community, to diagnose them from general circulation model (GCM) outputs. The catalogue aims to facilitate use of the diagnostics by the wider community studying climate and climate change. This paper describes the diagnostics and metrics currently in the catalogue, together with examples of their application to model evaluation studies and a summary of some of the insights these diagnostics have provided into the main shortcomings in current GCMs. Analysis of outputs from CFMIP and CMIP6 experiments will also be facilitated by the sharing of diagnostic codes via this catalogue. Any code which implements diagnostics relevant to analysing clouds – including cloud–circulation interactions and the contribution of clouds to estimates of climate sensitivity in models – and which is documented in peer-reviewed studies, can be included in the catalogue. We very much welcome additional contributions to further support community analysis of CMIP6 outputs.
Highlights
Cloud feedback remains the largest source of uncertainty associated with estimates of climate sensitivity using current global climate models
The seasonal variation of the normalized shortwave cloud radiative effect (NSCRE) is relatively well simulated by the models in the tropics: the largest inter-model spread is in the stratocumulus regime, and the main differences between models relate to variations in the amplitude of the seasonal cycle (see the sketch after this list)
We have described the metrics and diagnostics that are currently available in the CFMIP Diagnostic Codes Catalogue and have provided examples of their application to model evaluation
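The NSCRE mentioned above can be derived from standard CMIP monthly-mean radiation output. The sketch below is a minimal illustration, not the catalogue's own code: it assumes the standard CMIP variable names rsut, rsutcs and rsdt for all-sky outgoing, clear-sky outgoing and incident shortwave radiation at the top of the atmosphere, and it assumes the normalization is by the incident solar flux, which is one common way to remove the seasonal cycle of insolation; the exact definition used in the catalogue may differ.

```python
import numpy as np

def shortwave_cre(rsut, rsutcs):
    """SCRE at TOA: clear-sky minus all-sky outgoing shortwave (W m-2).

    Negative values indicate that clouds reflect extra sunlight (a cooling effect).
    """
    return rsutcs - rsut

def normalized_scre(rsut, rsutcs, rsdt):
    """Illustrative NSCRE: SCRE divided by TOA incident shortwave (rsdt).

    The normalization by rsdt is an assumption made for this sketch; it removes
    the seasonal cycle of insolation so that the seasonal variation of the cloud
    effect itself can be compared across models and regimes.
    """
    scre = shortwave_cre(rsut, rsutcs)
    # Avoid dividing by zero during polar night; return NaN where there is no insolation.
    return np.where(rsdt > 0.0, scre / np.maximum(rsdt, 1e-6), np.nan)

# Example with synthetic monthly-mean values for a single grid point (W m-2).
rsdt = np.array([420.0, 415.0, 400.0, 380.0, 360.0, 350.0,
                 355.0, 370.0, 390.0, 405.0, 415.0, 420.0])
rsutcs = 0.10 * rsdt          # clear-sky reflection
rsut = rsutcs + 0.25 * rsdt   # additional reflection by clouds
print(normalized_scre(rsut, rsutcs, rsdt))  # constant -0.25 by construction
```

Applied to real model output, the same function would be evaluated on the monthly climatology of each grid box (or of each cloud regime) so that the amplitude of the seasonal cycle of NSCRE can be compared across models.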
Summary
Cloud feedback remains the largest source of uncertainty associated with estimates of climate sensitivity using current global climate models. A range of methodologies, metrics and diagnostics have been developed, many of which utilize information on clouds derived from the observational simulators. Use of these tools has led to considerable progress over the last decade in understanding the uncertainties and errors associated with GCM cloud simulations. The US CLIVAR MJO Working Group (Waliser et al., 2009) produced a collection of diagnostics and metrics for CMIP5 which reflected shortcomings in the representation of processes that may be relevant to the simulation of the Madden–Julian Oscillation (MJO) (https://www.ncl.ucar.edu/Applications/mjoclivar.shtml). These scripts were applied to CMIP5 models to evaluate their simulations of various aspects of the MJO (e.g. climate variability and predictability; see Waliser et al., 2009; Kim et al., 2009, 2014).