Abstract
In this paper, a new neural network (NN) algorithm to retrieve the tropospheric ozone column from Ozone Monitoring Instrument (OMI) Level 1b data is presented. The algorithm builds on previous studies to improve the following: (i) the geographical coverage of the NN, by extending its training set to ozonesonde data from midlatitudes, tropics and poles; (ii) the definition of the output product, by using tropopause pressure information from reanalysis data; and (iii) the retrieval accuracy, by using ancillary data (NCEP tropopause pressure and temperature profile, monthly mean tropospheric ozone column from a satellite climatology) to better constrain the tropospheric ozone retrievals from OMI radiances. The results indicate that the algorithm is able to retrieve the tropospheric ozone column with a root mean square error (RMSE) of about 5–6 DU in all latitude bands. The design of the new NN algorithm is extensively discussed, validation results against independent ozone soundings and chemistry/transport model (CTM) simulations are shown, and other characteristics of the algorithm – i.e., its capability to detect non-climatological tropospheric ozone situations and its sensitivity to the tropopause pressure – are discussed.
Highlights
The neural network (NN) improves geographical coverage by extending its training set to ozonesonde data from midlatitudes, tropics and poles
The results obtained over the whole Dtest set are shown in Fig. 2, where the performance of the algorithm is summarized through the mean bias, the root mean square error (RMSE) and the Pearson correlation coefficient between the reference TCO values and those retrieved by the NN
Separate error statistics were computed for the Dtest1 set, containing examples from the period between 2004 and 2008, and the Dtest2 set, consisting of examples acquired after 2008
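As an illustrative sketch (the paper's actual evaluation code is not shown), the three error statistics above – mean bias, RMSE and Pearson correlation – can be computed from paired reference and retrieved ozone columns as follows; the sample TCO values are hypothetical:

```python
import numpy as np

def error_statistics(reference, retrieved):
    """Mean bias, RMSE and Pearson correlation coefficient between
    reference and retrieved tropospheric ozone columns (in DU)."""
    reference = np.asarray(reference, dtype=float)
    retrieved = np.asarray(retrieved, dtype=float)
    diff = retrieved - reference          # retrieval minus reference
    bias = diff.mean()                    # mean bias
    rmse = np.sqrt((diff ** 2).mean())    # root mean square error
    corr = np.corrcoef(reference, retrieved)[0, 1]  # Pearson correlation
    return bias, rmse, corr

# Hypothetical TCO values in Dobson units (DU), for illustration only
ref = [30.0, 35.0, 28.0, 40.0]
ret = [31.0, 33.5, 29.0, 41.0]
bias, rmse, corr = error_statistics(ref, ret)
```

Computing the statistics separately on the Dtest1 and Dtest2 subsets is then just a matter of calling the same function on the two partitions of the test data.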
Summary
NNs can be considered algorithms for nonlinear regression and function approximation. Several types of NNs can be devised; they share a number of common characteristics: (i) the computation is distributed among elementary units (called neurons), and (ii) the relationship to be approximated is learned by the NN from a training dataset. The network implements a parametric mapping y = f(x; W), where W is a set of free parameters to be adjusted from a training dataset. The adjustment of the free parameters is made according to a learning algorithm, which basically consists of an iterative minimization of an error cost function of the kind E(W) = Σ_i ||t_i − f(x_i; W)||², where (x_i, t_i) are the input–target pairs of the training set. According to the exact definition of the cost function and to the choice of the iterative method for its minimization, several learning algorithms can be defined. The reader can refer to Bishop (1995) or Haykin (1999) for more detailed information
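A minimal sketch of this scheme – not the paper's retrieval code – is a one-hidden-layer NN whose free parameters W are adjusted by plain gradient descent on a mean-squared-error cost, here fitting a toy sine function; the architecture, learning rate and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: approximate t = sin(x) on [-pi, pi]
x = rng.uniform(-np.pi, np.pi, size=(200, 1))
t = np.sin(x)

# Free parameters W = (W1, b1, W2, b2) of a one-hidden-layer network
n_hidden = 16
W1 = rng.normal(0.0, 0.5, (1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):
    # Forward pass: f(x; W)
    h = np.tanh(x @ W1 + b1)
    y = h @ W2 + b2
    # Cost E(W) = mean squared error over the training set
    err = y - t
    # Backward pass: gradients of E with respect to the free parameters
    gy = 2.0 * err / len(x)
    gW2 = h.T @ gy
    gb2 = gy.sum(axis=0)
    gh = (gy @ W2.T) * (1.0 - h ** 2)
    gW1 = x.T @ gh
    gb1 = gh.sum(axis=0)
    # Iterative minimization: one gradient-descent step
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(x @ W1 + b1) @ W2 + b2 - t) ** 2).mean())
```

Replacing the cost definition or the update rule (e.g. with conjugate gradients or Levenberg–Marquardt steps) yields the different learning algorithms the text alludes to.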