To improve precipitation estimation accuracy, new methods that can merge different precipitation measurement modalities are necessary. In this study, we propose a deep learning method to merge rain gauge measurements with a ground-based radar composite and thermal infrared satellite imagery. The proposed convolutional neural network, built on an encoder–decoder architecture, performs a multiscale analysis of the three input modalities to simultaneously estimate the rainfall probability and the precipitation rate at a spatial resolution of 2 km. The training of our model and its performance evaluation are carried out on a dataset spanning 5 years, from 2015 to 2019, and covering Belgium, the Netherlands, Germany and the North Sea. Our results for instantaneous precipitation detection, instantaneous precipitation rate estimation, and daily rainfall accumulation estimation show that the best accuracy is obtained by the model combining all three modalities. An ablation study comparing every possible combination of the three modalities shows that merging rain gauge measurements with radar data considerably increases the accuracy of the precipitation estimates, and that adding satellite imagery provides precipitation estimates where rain gauge and radar coverage are lacking. We also show that, for precipitation rate estimation, our multi-modal model significantly improves performance compared to the European radar composite product provided by OPERA and the quasi gauge-adjusted radar product RADOLAN provided by the DWD.
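To make the described architecture concrete, the sketch below shows one plausible way to implement a multi-modal encoder–decoder with two output heads (rainfall probability and precipitation rate) in PyTorch. It is not the authors' published code: the class name, channel counts, number of scales, and the early channel-wise fusion of the three modalities are illustrative assumptions.

```python
# A minimal sketch (not the authors' published code) of a multi-modal
# encoder-decoder CNN that fuses rain gauge, radar, and infrared inputs
# and predicts both a rainfall probability map and a rain-rate map.
# All layer sizes, channel counts, and names are illustrative assumptions.
import torch
import torch.nn as nn


class MultiModalPrecipNet(nn.Module):
    def __init__(self, gauge_ch=1, radar_ch=1, ir_ch=1, base=32):
        super().__init__()
        in_ch = gauge_ch + radar_ch + ir_ch           # early fusion of the three modalities
        self.enc1 = self._block(in_ch, base)          # full resolution
        self.enc2 = self._block(base, base * 2)       # 1/2 resolution
        self.enc3 = self._block(base * 2, base * 4)   # 1/4 resolution (multiscale analysis)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec2 = self._block(base * 4 + base * 2, base * 2)
        self.dec1 = self._block(base * 2 + base, base)
        # Two output heads: rainfall probability and precipitation rate.
        self.head_prob = nn.Conv2d(base, 1, kernel_size=1)
        self.head_rate = nn.Conv2d(base, 1, kernel_size=1)

    @staticmethod
    def _block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, gauges, radar, ir):
        x = torch.cat([gauges, radar, ir], dim=1)      # stack modalities on the channel axis
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up(d2), e1], dim=1))
        rain_prob = torch.sigmoid(self.head_prob(d1))  # probability of precipitation
        rain_rate = torch.relu(self.head_rate(d1))     # non-negative rain rate (e.g. mm/h)
        return rain_prob, rain_rate


# Example: three co-gridded inputs on the same 2 km grid.
if __name__ == "__main__":
    net = MultiModalPrecipNet()
    g = torch.rand(1, 1, 128, 128)   # gridded rain gauge field
    r = torch.rand(1, 1, 128, 128)   # ground-based radar composite
    t = torch.rand(1, 1, 128, 128)   # thermal infrared satellite channel
    prob, rate = net(g, r, t)
    print(prob.shape, rate.shape)    # torch.Size([1, 1, 128, 128]) each
```

In this hypothetical setup, the probability head would support precipitation detection while the rate head supports instantaneous rate estimation, which is one common way to realise the dual-output design the abstract describes.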