Abstract

This article presents two new state-of-the-art spatial rain field interpolation convolutional neural networks (SRFICNNs), referred to as the learned deviation (LD) and learned interpolation (LI) models, for predicting the point rain rate at finer spatial scales. The main contribution is the successful introduction of established deep learning techniques into high-resolution (HR) rainfall rate prediction, with a significant improvement in accuracy. This is important for the effective implementation of fade mitigation techniques in both terrestrial and satellite networks. Comparison of the models' performance with ground truth (radar measurements) shows that the proposed models achieve excellent mean square error (MSE) and structural similarity (SSIM) in rainfall field reconstruction when the network depth falls in the range of 15–25 weight layers; the final model uses 20 layers for HR point rain rate prediction. Further study shows that the LD model offers faster convergence and yields more accurate rain rate predictions. In particular, this article compares the rain rate exceedance distribution and log-normality property derived from the model estimates with values calculated from measured data. Results show that the LD model gives highly accurate estimates of these two indices, with corresponding root mean square (rms) errors of 5.1709 × 10⁻⁴ and 0.0013, respectively.
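
For readers unfamiliar with the reported reconstruction metrics, the sketch below illustrates how MSE and SSIM between a predicted high-resolution rain field and radar ground truth could be computed. This is not the authors' code: the array shapes, the synthetic fields, and the use of scikit-image's structural_similarity are illustrative assumptions.

```python
# Hedged sketch: evaluating a predicted rain-rate field against radar
# measurements with MSE and SSIM. All data here is synthetic and only
# stands in for the radar fields used in the article.
import numpy as np
from skimage.metrics import structural_similarity as ssim


def evaluate_rain_field(predicted: np.ndarray, measured: np.ndarray):
    """Return (MSE, SSIM) between a predicted rain field (mm/h) and ground truth."""
    mse = float(np.mean((predicted - measured) ** 2))
    # data_range must be given explicitly for floating-point inputs.
    ssim_index = ssim(predicted, measured,
                      data_range=float(measured.max() - measured.min()))
    return mse, ssim_index


# Usage with synthetic stand-in fields (hypothetical 128 x 128 grid).
rng = np.random.default_rng(0)
measured = rng.gamma(shape=2.0, scale=3.0, size=(128, 128))
predicted = measured + rng.normal(scale=0.5, size=measured.shape)
print(evaluate_rain_field(predicted, measured))
```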
