Abstract

Missing data in weather radar image sequences can bias quantitative precipitation estimation (QPE) and quantitative precipitation forecasting (QPF) studies and hinder the production of high-quality QPE and QPF products. Traditional approaches to reconstructing missing weather radar images replace missing frames with the nearest available image or with interpolated images. However, the accuracy of these approaches is limited because they neglect the intensification and disappearance of radar echoes. In this study, we propose a deep neural network (DNN) that combines convolutional neural networks (CNNs) with bidirectional convolutional long short-term memory networks (CNN-BiConvLSTM) to address this problem and to establish a deep-learning benchmark. Using the proposed training schedule, the model is trained to handle arbitrary missing patterns. Its performance is then evaluated and compared with baseline models under different missing patterns. The baselines include the nearest-neighbor approach, linear interpolation, optical flow methods, and two DNN models: a three-dimensional CNN (3DCNN) and a CNN-ConvLSTM. Experimental results show that the CNN-BiConvLSTM model outperforms all baseline models. The influence of data quality on the interpolation methods is further investigated, and the CNN-BiConvLSTM model is found to be largely unaffected by lower-quality input radar images, reflecting its robustness. Our results suggest good prospects for applying the CNN-BiConvLSTM model to improve the quality of weather radar datasets.
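To give a concrete picture of the architecture described above, the following is a minimal TensorFlow/Keras sketch of a CNN-BiConvLSTM-style network that reconstructs a single missing radar frame from the surrounding image sequence. The layer widths, input resolution, and single-frame output here are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (not the authors' code) of a CNN-BiConvLSTM-style network
# for reconstructing a missing radar frame from its neighboring frames.
from tensorflow.keras import layers, models

def build_cnn_biconvlstm(seq_len=8, height=128, width=128, channels=1):
    # Input: a sequence of radar images bracketing the missing frame.
    inputs = layers.Input(shape=(seq_len, height, width, channels))

    # CNN encoder applied frame-by-frame to extract spatial features.
    x = layers.TimeDistributed(
        layers.Conv2D(32, 3, padding="same", activation="relu"))(inputs)
    x = layers.TimeDistributed(layers.MaxPooling2D(2))(x)

    # Bidirectional ConvLSTM captures echo motion in both the forward
    # and backward temporal directions.
    x = layers.Bidirectional(
        layers.ConvLSTM2D(32, 3, padding="same", return_sequences=False),
        merge_mode="concat")(x)

    # CNN decoder maps the merged features back to a full-resolution frame.
    x = layers.UpSampling2D(2)(x)
    outputs = layers.Conv2D(channels, 3, padding="same", activation="relu")(x)
    return models.Model(inputs, outputs)

model = build_cnn_biconvlstm()
model.compile(optimizer="adam", loss="mse")
```

Training such a model on sequences with randomly masked frames (the arbitrary missing patterns mentioned above) would follow the usual Keras fit loop with the mean-squared-error loss compiled here.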

Highlights

  • The convolutional neural network (CNN)-BiConvLSTM model performed best among all models, for every evaluation metric and missing pattern considered

  • Because CNN-BiConvLSTM and CNN-ConvLSTM share identical network structures except for the bidirectional component, the improvement implies that spatio-temporal information from both temporal directions is important

  • When features from subsequent radar images were incorporated into the CNN-ConvLSTM model (i.e., CNN-BiConvLSTM), its performance surpassed that of the 3DCNN model


Summary

Introduction

Despite the successful application of the optical flow method to the interpolation of radar and precipitation data, it cannot overcome its basic assumption that the intensity of the features remains constant [12]. This assumption is not appropriate because of the intensification and disappearance of radar echoes. The main advantage of a deep neural network (DNN) is that it learns motion from a radar image sequence rather than from only two frames, as optical flow methods usually do [12]. Shi et al. [28] proposed a convolutional long short-term memory (ConvLSTM) network to improve precipitation nowcasting. Another RNN-based model, the trajectory gated recurrent unit (TrajGRU), was introduced in a subsequent study and showed higher accuracy [38].
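To make the constant-intensity limitation concrete, the sketch below shows the kind of optical-flow baseline referred to above: it estimates the displacement field between the two frames bracketing a gap and advects the earlier frame half a step forward, so echoes are only moved, never intensified or weakened. This is an assumed illustration using OpenCV's Farnebäck method, not the paper's exact baseline implementation.

```python
# Optical-flow interpolation sketch (assumed illustration, not the paper's code):
# estimate motion between the bracketing frames, then warp the earlier frame
# halfway along the flow. Intensities are advected but never change magnitude.
import cv2
import numpy as np

def interpolate_midframe(frame_before: np.ndarray, frame_after: np.ndarray) -> np.ndarray:
    """Estimate the missing middle frame from its two neighbors (uint8 grayscale)."""
    flow = cv2.calcOpticalFlowFarneback(
        frame_before, frame_after, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    h, w = frame_before.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Backward-warp the earlier frame along half of the displacement field.
    map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_before, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```

Even with refinements such as blending a backward warp of the later frame, the reconstructed echoes can only translate; this is precisely the limitation the DNN approach is meant to address.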

CNN-BiConvLSTM Model
Traditional Baseline Models
Basic DNNs
Dataset
Experimental Configuration
Evaluation
Evaluation and Comparison
Influence of Data Quality and Model Size
Conclusions and Discussion