Abstract

Comparisons with ground-based correlative measurements constitute a key component in the validation of satellite data on atmospheric composition. The error budget of these comparisons contains not only the measurement errors but also several terms related to differences in sampling and smoothing of the inhomogeneous and variable atmospheric field. A versatile system for Observing System Simulation Experiments (OSSEs), named OSSSMOSE, is used here to quantify these terms. Based on the application of pragmatic observation operators onto high-resolution atmospheric fields, it allows a simulation of each individual measurement, and consequently, also of the differences to be expected from spatial and temporal field variations between both measurements making up a comparison pair. As a topical case study, the system is used to evaluate the error budget of total ozone column (TOC) comparisons between GOME-type direct fitting (GODFITv3) satellite retrievals from GOME/ERS2, SCIAMACHY/Envisat, and GOME-2/MetOp-A, and ground-based direct-sun and zenith–sky reference measurements such as those from Dobson, Brewer, and zenith-scattered-light (ZSL-)DOAS instruments, respectively. In particular, the focus is placed on the GODFITv3 reprocessed GOME-2A data record vs. the ground-based instruments contributing to the Network for the Detection of Atmospheric Composition Change (NDACC). The simulations are found to reproduce the actual measurements almost to within the measurement uncertainties, confirming that the OSSE approach and its technical implementation are appropriate. This work reveals that many features of the comparison spread and median difference can be understood as due to metrological differences, even when using strict co-location criteria. In particular, sampling difference errors regularly exceed measurement uncertainties at most mid- and high-latitude stations, with values up to 10 % and more in extreme cases.
Smoothing difference errors only play a role in the comparisons with ZSL-DOAS instruments at high latitudes, especially in the presence of a polar vortex due to the strong TOC gradient it induces. At tropical latitudes, where TOC variability is lower, both types of errors remain below about 1 % and consequently do not contribute significantly to the comparison error budget. The detailed analysis of the comparison results, including the metrological errors, suggests that the published random measurement uncertainties for GODFITv3 reprocessed satellite data are potentially overestimated, and adjustments are proposed here. This successful application of the OSSSMOSE system to close for the first time the error budget of TOC comparisons bodes well for potential future applications, which are briefly touched upon.
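The core OSSE idea described in the abstract — applying pragmatic observation operators to a high-resolution field and differencing the simulated measurements of a comparison pair — can be illustrated with a minimal sketch. The 1-D field, the boxcar observation operator, and all numerical values below are illustrative assumptions for a vortex-edge-like TOC gradient, not the actual OSSSMOSE implementation:

```python
import numpy as np

# Hypothetical 1-D total ozone column (TOC) field [DU] along a transect,
# with a strong gradient such as near a polar-vortex edge (illustrative).
x_km = np.linspace(0.0, 500.0, 501)                  # 1 km grid
toc = 300.0 + 0.2 * x_km + 5.0 * np.sin(x_km / 30.0)

def observe(field, grid_km, centre_km, footprint_km):
    """Pragmatic observation operator: boxcar average over the footprint."""
    mask = np.abs(grid_km - centre_km) <= footprint_km / 2.0
    return field[mask].mean()

station_km   = 100.0   # ground-based station location
satellite_km = 150.0   # satellite pixel centre, 50 km away (co-location mismatch)

ground = observe(toc, x_km, station_km, 1.0)      # quasi-point measurement
sat    = observe(toc, x_km, satellite_km, 80.0)   # 80 km footprint average

# Decompose the comparison difference into sampling and smoothing terms:
sampling  = observe(toc, x_km, satellite_km, 1.0) - ground   # different air mass
smoothing = sat - observe(toc, x_km, satellite_km, 1.0)      # different resolution

print(f"total difference : {sat - ground:6.2f} DU")
print(f"  sampling term  : {sampling:6.2f} DU")
print(f"  smoothing term : {smoothing:6.2f} DU")
```

By construction the two terms sum exactly to the total comparison difference; in a strong-gradient field the sampling term dominates, in line with the findings reported above.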

Highlights

  • Compliance of essential climate variable (ECV) records obtained from satellite platforms with user requirements such as those formulated within the Global Climate Observing System (GCOS) framework is usually assessed through validation studies

  • The Observing System Simulation Experiments (OSSEs) manage to qualitatively reproduce this behaviour, both in comparison median and spread, for the better part of the time series. This did require the use of an assumed Système d’Analyse par Observation Zénithale (SAOZ) measurement uncertainty of 2 %, which is considerably larger than the DOAS fitting uncertainties provided with the NDACC data files but far smaller than the 4.7 % precision derived by Hendrick et al. (2011)

  • The error budget of total ozone column ground-based validation work was analyzed in detail, including for the first time the errors due to the interplay of both sampling and smoothing differences between the satellite and ground-based measurements, and an inhomogeneous and variable ozone field



Introduction

Compliance of essential climate variable (ECV) records obtained from satellite platforms with user requirements such as those formulated within the Global Climate Observing System (GCOS) framework is usually assessed through validation studies. These include as a key component the comparison of the satellite data with correlative ground-based reference measurements (see, e.g., Keppens et al., 2015, this issue, for a detailed protocol). In these validation exercises, a compromise must be made between, on the one hand, an abundance of comparison pairs and, on the other hand, non-instrumental comparison errors due to imperfect co-location in space and time between the satellite and ground-based measurements.
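The trade-off between pair abundance and co-location error can be sketched with synthetic numbers. The error-growth rates with distance and time separation below are purely illustrative assumptions, not values derived in this study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ground-based TOC measurements and satellite overpasses [DU].
n = 1000
true_toc = 300.0 + 30.0 * np.sin(np.arange(n) * 2 * np.pi / 365)  # seasonal cycle
dist_km  = rng.uniform(0.0, 500.0, n)    # station-to-pixel distance
dt_hours = rng.uniform(-12.0, 12.0, n)   # overpass time minus measurement time

# Assumed: co-location mismatch error grows with distance and time separation.
ground = true_toc + rng.normal(0.0, 3.0, n)
sat = (true_toc + rng.normal(0.0, 3.0, n)
       + 0.02 * dist_km * rng.standard_normal(n)
       + 0.5 * dt_hours * rng.standard_normal(n))

def compare(max_km, max_h):
    """Number of co-located pairs and spread of differences for given criteria."""
    sel = (dist_km <= max_km) & (np.abs(dt_hours) <= max_h)
    return int(sel.sum()), float(np.std(sat[sel] - ground[sel]))

for max_km, max_h in [(500.0, 12.0), (200.0, 6.0), (50.0, 3.0)]:
    n_pairs, spread = compare(max_km, max_h)
    print(f"<= {max_km:5.0f} km, <= {max_h:4.1f} h : {n_pairs:4d} pairs, "
          f"spread {spread:5.2f} DU")
```

Tightening the criteria shrinks the comparison spread but rapidly depletes the number of usable pairs, which is the compromise described above.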

Error budget of a data comparison
An Observing System Simulation Experiment
Total ozone column validation as a topical case study
Satellite data
Ground-based network data
Direct-sun instruments
ZSL-DOAS instruments
Metrology simulator
Module 1: data and metadata
Module 2: air mass descriptor
Module 3: observation simulator
Measurement simulation
Module 4: comparison simulator
Case studies
Co-located measurements and measurement footprints
Observed and modelled TOC time series
Comparison error budget: observed and simulated
Error distributions
Different co-location criteria
Choice of modelled fields
Spread of the differences
Median of the differences
Zenith–sky instruments
Findings
Conclusions and prospects
