Abstract

A cloud image can provide significant information, such as precipitation and solar irradiation. Predicting short-term cloud motion from images is the primary means of making intra-hour irradiation forecasts for solar-energy production and is also important for precipitation forecasts. However, it is very challenging to predict cloud motion, especially nonlinear motion, accurately. Traditional methods of cloud-motion prediction are based on block matching and the linear extrapolation of cloud features; they largely ignore nonstationary processes, such as inversion and deformation, as well as the boundary conditions of the prediction region. In this paper, the prediction of cloud motion is regarded as a spatiotemporal sequence-forecasting problem, for which an end-to-end deep-learning model is established; both the input and output are spatiotemporal sequences. The model is based on the gated recurrent unit (GRU)-recurrent convolutional network (RCN), a variant of the GRU that uses convolutional structures to handle spatiotemporal features. We further introduce surrounding context into the prediction task. We apply the proposed Multi-GRU-RCN model to FengYun-2G satellite infrared data and compare the results to those of the state-of-the-art method of cloud-motion prediction, the variational optical flow (VOF) method, and two well-known deep-learning models, namely, the convolutional long short-term memory (ConvLSTM) network and the GRU. The Multi-GRU-RCN model predicts intra-hour cloud motion better than the other methods, achieving the largest peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). The results prove the applicability of the GRU-RCN method to spatiotemporal data prediction and indicate the advantages of our model for further applications.
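For intuition about how a GRU can be given convolutional structure, the following is a minimal sketch of a convolutional GRU cell of the kind a GRU-RCN builds on, written in PyTorch; the class name, kernel size, and channel counts are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal sketch of a convolutional GRU (GRU-RCN-style) cell.
# Hyperparameters and naming are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        # Gates are computed with convolutions instead of the fully connected
        # layers of a standard GRU, so the spatial structure of the frame is kept.
        self.gates = nn.Conv2d(in_channels + hidden_channels,
                               2 * hidden_channels, kernel_size, padding=padding)
        self.candidate = nn.Conv2d(in_channels + hidden_channels,
                                   hidden_channels, kernel_size, padding=padding)

    def forward(self, x, h):
        # x: (batch, in_channels, H, W); h: (batch, hidden_channels, H, W)
        zr = torch.sigmoid(self.gates(torch.cat([x, h], dim=1)))
        z, r = zr.chunk(2, dim=1)                        # update and reset gates
        h_tilde = torch.tanh(self.candidate(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde                 # new hidden state

# Example: one step over a batch of 8 single-channel 64x64 infrared frames.
cell = ConvGRUCell(in_channels=1, hidden_channels=16)
x = torch.randn(8, 1, 64, 64)
h = torch.zeros(8, 16, 64, 64)
h = cell(x, h)
```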

Highlights

  • Cloud-motion prediction has received significant attention because of its importance for the prediction of both precipitation and solar-energy availability [1]

  • Results of the variational optical flow (VOF), gated recurrent unit (GRU), long short-term memory (LSTM), ConvLSTM, and Multi-GRU-recurrent convolutional network (RCN) models on the test data for each day are compared in Figures 6 and 7

  • The relationship among GRU, LSTM, ConvLSTM, GRU-RCN, and Multi-GRU-RCN is illustrated; GRU-RCN has fewer parameters than ConvLSTM


Summary

Introduction

Cloud-motion prediction has received significant attention because of its importance for the prediction of both precipitation and solar-energy availability [1]. Shakya and Kumar [27] applied a fractional-order optical-flow method to cloud-motion estimation and used extrapolations based on advection and anisotropic diffusion to make predictions. Although a deep CNN performs excellently when dealing with spatial data, it discards the temporal information [34] that provides important clues in the forecasting of cloud motion. We therefore need to modify the structure of the GRU-RCN model and apply it directly at the pixel level. Another challenge in the cloud-motion prediction problem is that new clouds often appear suddenly at the boundary of the prediction region. Using a database of FengYun-2G IR satellite images, we compare our model's intra-hour predictions to those of the state-of-the-art variational optical-flow (VOF) method and three deep-learning models (ConvLSTM, LSTM, and GRU); our model performs better than the other methods.
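As a concrete illustration of how such a comparison can be scored, the sketch below computes the two metrics named in the abstract, PSNR and SSIM, using scikit-image; the frame arrays here are random stand-ins for the satellite images, so the helper function and its inputs are hypothetical.

```python
# Hedged sketch: scoring a predicted frame against the observed frame with
# PSNR and SSIM (scikit-image). Real inputs would be satellite IR images;
# the random arrays below are placeholders.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_prediction(pred, target):
    """Return (PSNR, SSIM) for a predicted frame vs. the observed frame."""
    data_range = float(target.max() - target.min())
    psnr = peak_signal_noise_ratio(target, pred, data_range=data_range)
    ssim = structural_similarity(target, pred, data_range=data_range)
    return psnr, ssim

# Placeholder frames standing in for one predicted/observed image pair.
target = np.random.rand(128, 128).astype(np.float32)
pred = target + 0.05 * np.random.randn(128, 128).astype(np.float32)
print(evaluate_prediction(pred, target))
```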

Deep CNN
GRU-RCN
Multi-GRU-RCN Model
Outline
Experimental Setup
Test
Results and Analysis
Method
Discussion
Conclusions