Abstract. Many applications of geophysical data – whether from surface observations, satellite retrievals, or model simulations – rely on aggregates produced at coarser spatial (e.g. degrees) and/or temporal (e.g. daily and monthly) resolution than the highest available from the technique. Almost all of these aggregates report the arithmetic mean and standard deviation as summary statistics, which are what data users employ in their analyses. These statistics are most meaningful for normally distributed data; however, for some quantities, such as aerosol optical depth (AOD), it is well known that large-scale distributions are closer to log-normal, for which a geometric mean and standard deviation would be more appropriate. This study presents a method of assessing whether a given sample of data is more consistent with an underlying normal or log-normal distribution, using the Shapiro–Wilk test, and tests AOD frequency distributions on spatial scales of 1° and daily, monthly, and seasonal temporal scales. A broadly consistent picture is observed using Aerosol Robotic Network (AERONET), Multiangle Imaging SpectroRadiometer (MISR), Moderate Resolution Imaging Spectroradiometer (MODIS), and Goddard Earth Observing System Version 5 Nature Run (G5NR) data. These data sets are complementary: AERONET has the highest AOD accuracy but is sparse, and MISR and MODIS represent different satellite retrieval techniques and sampling. As a model simulation, G5NR is spatiotemporally complete. As timescales increase from days to months to seasons, data become increasingly more consistent with log-normal than with normal distributions, and the differences between arithmetic- and geometric-mean AOD become larger, with the geometric mean becoming systematically smaller. Assuming normality systematically overstates both the typical level of AOD and its variability.
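The normal-vs-log-normal assessment described above can be sketched in a few lines: apply the Shapiro–Wilk test once to the raw sample and once to its logarithm, and compare which transform is more consistent with normality. This is a minimal illustration using a synthetic log-normal sample, not the study's actual data or code; the parameter values are assumptions chosen only to mimic an AOD-like distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic AOD-like sample: log-normally distributed by construction
# (median ~0.15 and sigma=0.8 are illustrative assumptions)
aod = rng.lognormal(mean=np.log(0.15), sigma=0.8, size=500)

# Shapiro-Wilk test on the raw values (null hypothesis: the sample is normal)
w_raw, p_raw = stats.shapiro(aod)

# Shapiro-Wilk test on the log-transformed values
# (null hypothesis: the sample is log-normal in the original space)
w_log, p_log = stats.shapiro(np.log(aod))

# Judge the sample "more consistent" with whichever hypothesis
# yields the higher test statistic W (closer to 1 means closer to normal)
more_lognormal = w_log > w_raw
```

For a strongly skewed sample like this one, the raw-value test rejects normality while the log-transformed test does not, so `more_lognormal` is `True`.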
There is considerable regional heterogeneity in the results: in low-AOD regions such as the open ocean and mountains, the AOD difference is often small enough (<0.01) to be unimportant for many applications, especially on daily timescales. However, in continental outflow regions and near source regions over land, and on monthly or seasonal timescales, the difference is frequently larger than the Global Climate Observing System (GCOS) goal uncertainty for a climate data record (the larger of 0.03 or 10 %). This is important because it shows that sensitivity to the averaging method can and often does introduce systematic effects larger than the total GCOS goal uncertainty. Using three well-studied AERONET sites, the magnitude of estimated AOD trends is shown to be sensitive to the choice of arithmetic vs. geometric means, although the signs are consistent. The main recommendations from the study are that (1) the distribution of a geophysical quantity should be analysed in order to assess how best to aggregate it, (2) ideally AOD aggregates such as satellite level 3 products (but also ground-based data and model simulations) should report a geometric-mean or median AOD rather than (or in addition to) arithmetic-mean AOD, and (3) as this is unlikely in the short term due to the computational burden involved, users can calculate geometric-mean monthly aggregates from widely available daily mean data as a stopgap, since daily aggregates are less sensitive to the choice of aggregation scheme than monthly or seasonal aggregates. Furthermore, distribution shapes can have implications for the validity of statistical metrics often used for comparison and evaluation of data sets. The methodology is not restricted to AOD and can be applied to other quantities.
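Recommendation (3), computing geometric-mean monthly aggregates from daily means, amounts to averaging in log space. A minimal sketch follows; the daily values are hypothetical and serve only to illustrate the calculation, not to reproduce the study's results.

```python
import numpy as np
from scipy import stats

# Hypothetical month of daily-mean AOD values (illustrative numbers only)
daily_aod = np.array([0.08, 0.12, 0.10, 0.35, 0.22, 0.15, 0.09,
                      0.41, 0.18, 0.11, 0.07, 0.25, 0.13, 0.16])

# Arithmetic monthly mean, as typically reported in level 3 products
arith_mean = daily_aod.mean()

# Geometric monthly mean: exp of the mean of the logs
geo_mean = stats.gmean(daily_aod)

# Geometric (multiplicative) standard deviation: exp of the SD of the logs;
# the plausible range is roughly geo_mean / geo_sd to geo_mean * geo_sd
geo_sd = np.exp(np.std(np.log(daily_aod), ddof=1))
```

By the arithmetic-geometric mean inequality, `geo_mean` is always at or below `arith_mean`, and for right-skewed data such as AOD the gap widens, which is the systematic difference the abstract describes.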