Abstract

Neural networks have been widely used to provide retrievals of geophysical parameters from spectral radiance measurements made remotely by air-, ground-, and space-based sensors. The advantages of retrievals based on neural networks include speed of execution, simplicity of the trained algorithm, and ease of error analysis, and the proliferation of high quality training data sets derived from models and/or operational measurements has further facilitated their use. In this article, we provide examples of geophysical retrieval algorithms based on neural networks with a focus on Jacobian analysis. We examine a hypothetical 80-channel hyperspectral microwave atmospheric sounder (HyMAS) and construct examples comparing neural network water vapor retrieval performance with simple regressions. Jacobians (derivatives of the outputs with respect to the network weights and with respect to the inputs) are also presented and discussed. Finally, a discussion of the Jacobian operating points is provided.

Highlights

  • Three-dimensional (3D) measurements of the Earth’s surface and atmospheric thermodynamic state have been made indirectly from satellite measurements for many years [1,2]. These measurements are inferred from direct observations of upwelling thermal emission and scattered radiance in microwave and infrared spectral regions, typically near the peaks and troughs of atmospheric absorption lines due largely to molecular oxygen, water vapor, and carbon dioxide.

  • We focus on feedforward multilayer perceptron (FFMLP) neural networks due to their simplicity, flexibility, and ease of use.

  • Neural network retrievals of atmospheric water vapor were shown to substantially outperform linear regression retrievals for an 80-channel hyperspectral microwave sounding system, based on a global simulation using the NOAA88b profile dataset (see the illustrative sketch after this list).
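
As a purely illustrative companion to that result, the sketch below contrasts a small sigmoid-activation neural network with a linear-regression baseline on synthetic 80-channel data. It is not the authors' code: the data-generating function, network size, and training settings are assumptions chosen only to show the comparison workflow.

    # Minimal sketch (not the paper's code): compare a neural-network retrieval
    # with a linear-regression baseline on synthetic 80-channel "radiances".
    # The data-generating function and all settings below are assumptions.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_profiles, n_channels = 5000, 80
    X = rng.normal(size=(n_profiles, n_channels))     # stand-in brightness temperatures
    w = rng.normal(size=n_channels)
    y = np.tanh(X @ w / np.sqrt(n_channels))          # nonlinear stand-in for water vapor
    y += 0.05 * rng.normal(size=n_profiles)           # measurement-like noise

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    linear = LinearRegression().fit(X_tr, y_tr)
    mlp = MLPRegressor(hidden_layer_sizes=(20,), activation="logistic",
                       max_iter=2000, random_state=0).fit(X_tr, y_tr)

    def rmse(model):
        return float(np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2)))

    print(f"linear regression RMSE: {rmse(linear):.3f}")
    print(f"neural network RMSE:    {rmse(mlp):.3f}")

Because the synthetic target is nonlinear in the inputs, the sigmoid-hidden-layer network can fit structure that the linear baseline cannot, mirroring the qualitative behavior reported for the HyMAS simulation.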

Introduction

Three-dimensional (3D) measurements of the Earth’s surface and atmospheric thermodynamic state (temperature, moisture, pressure, precipitation, and so forth) have been made indirectly from satellite measurements for many years [1,2]. Neural networks offer a fast and flexible means of inverting these radiance observations to obtain geophysical parameters. Perhaps their most useful attribute is their scalability; a network with a sufficient number of weights and biases is capable of approximating a bounded, continuous function to an arbitrary level of precision over a finite domain [30]. The layers between the input layer and the output layer are called hidden layers and usually contain nonlinear elements. The various types of feedforward neural networks differ primarily in the nonlinear functions (the so-called activation functions) used in the hidden-layer nodes and in the training algorithms used to optimize the free parameters of the network. The simple form of the sigmoidal function and its derivative allows fast and accurate calculation of the gradients needed to optimize selection of the weights and biases and to carry out second-order error analysis.
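
To make that last point concrete, the following minimal sketch assumes a single-hidden-layer FFMLP with sigmoid activations and random placeholder weights (in practice the weights come from training). It evaluates the closed-form Jacobian of the network outputs with respect to the inputs at a chosen operating point, using the sigmoid derivative s' = s(1 - s), and checks it against a finite-difference estimate.

    # Minimal sketch, assuming a one-hidden-layer FFMLP with sigmoid activations.
    # Weights are random placeholders, purely for illustration.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 80, 20, 1                  # e.g. 80 channels -> 1 retrieved quantity
    W1 = 0.1 * rng.normal(size=(n_hidden, n_in))
    b1 = 0.1 * rng.normal(size=n_hidden)
    W2 = 0.1 * rng.normal(size=(n_out, n_hidden))
    b2 = 0.1 * rng.normal(size=n_out)

    def forward(x):
        return W2 @ sigmoid(W1 @ x + b1) + b2

    def input_jacobian(x):
        # dy/dx = W2 diag(s(1 - s)) W1, where s is the hidden-layer activation vector
        s = sigmoid(W1 @ x + b1)
        return W2 @ ((s * (1.0 - s))[:, None] * W1)

    x0 = rng.normal(size=n_in)                         # operating point (one input vector)
    J = input_jacobian(x0)                             # shape (n_out, n_in)

    # Verify against central finite differences
    eps = 1e-6
    J_fd = np.empty_like(J)
    for j in range(n_in):
        dx = np.zeros(n_in)
        dx[j] = eps
        J_fd[:, j] = (forward(x0 + dx) - forward(x0 - dx)) / (2 * eps)

    print("max |analytic - finite difference|:", float(np.abs(J - J_fd).max()))

The Jacobian with respect to the weights follows from the same chain-rule terms and is what the training algorithm uses; evaluating either Jacobian at different operating points (different input profiles) underlies the sensitivity discussion in the article.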

Feedforward multilayer perceptron neural networks
Ocean surface emissivity model
