Abstract

Precipitation forecasting for typhoons, especially nowcasting (short-term, up to two hours, high-resolution forecasting), is arguably one of the most demanding tasks in weather prediction. Traditional methods fall into two categories: 1) ensemble numerical weather prediction (NWP) systems, and 2) advection methods that extrapolate precipitation fields using radar-based motion estimates from optical flow. The former simulates the coupled physical equations of the atmosphere to generate multiple precipitation forecasts. The latter estimates a motion field by optical flow with smoothness penalties to approximate an advection forecast, adding stochastic perturbations to the motion field and intensity model. However, these methods either fail to meet the latency requirements of nowcasting or rely on the advection equation, and these drawbacks limit forecasting performance. Satellite imagery, which can be regarded as a sequence of video frames, benefits from machine learning technologies such as deep learning and is a promising avenue for precipitation nowcasting. Convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their combinations are used to generate future frames from previous context frames: in general, the CNN captures spatial dependencies, while the RNN captures temporal dependencies. However, CNNs carry strong inductive biases (translation invariance and locality), so they cannot capture location-variant information (natural motion and deformation) and fail to extract long-range dependencies. RNNs, in turn, are slow to train because back-propagation through their recurrent structure over long sequences is time-consuming. These drawbacks limit the operational utility of such methods and prevent skillful precipitation forecasting. This work proposes a novel artificial intelligence model to achieve skillful precipitation forecasting for typhoons.
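The optical-flow baseline mentioned above can be illustrated with a minimal sketch: given a motion field (here assumed already estimated, e.g. by optical flow), a backward semi-Lagrangian step advects the precipitation field one time step forward. This is purely illustrative, with nearest-neighbour sampling and no smoothness penalty or stochastic perturbation; the function and variable names are our own.

```python
import numpy as np

def advect(field, u, v, dt=1.0):
    """Backward semi-Lagrangian advection of a 2-D precipitation field
    by a motion field (u, v), using nearest-neighbour sampling."""
    h, w = field.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Trace each grid point backwards along the motion field and
    # clip to the domain boundary.
    src_y = np.clip(np.rint(ys - v * dt).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs - u * dt).astype(int), 0, w - 1)
    return field[src_y, src_x]

# Toy example: a single rain cell moving one pixel eastward per step.
field = np.zeros((5, 5))
field[2, 1] = 1.0
u = np.ones((5, 5))    # eastward motion, 1 px per step
v = np.zeros((5, 5))   # no meridional motion
forecast = advect(field, u, v)  # cell is now at (2, 2)
```

Iterating this step yields the multi-frame extrapolation forecast; the weaknesses the abstract points out (pure advection, no growth or decay of precipitation) are visible directly in the sketch.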
The satellite imagery containing precipitation is organized into a series of sequences, each containing multiple frames over time. We redesign the traditional CNN-RNN architecture to mitigate information loss/forgetting and provide skillful precipitation forecasting. Furthermore, we introduce a generative adversarial training strategy and propose a novel random-patch loss function, which ensures that the model generates high-fidelity precipitation forecasts. In summary, our proposed model reduces complex tropical cyclone (TC) precipitation forecasting to a video prediction problem, largely avoiding the uncertainties of explicit physical modelling and enabling a fully data-driven artificial intelligence paradigm that applies deep learning to satellite image sequences for weather-forecasting-related sciences.
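The abstract does not specify how the random-patch loss is defined; one plausible reading, sketched below purely as an assumption, is a hinge-style adversarial loss evaluated on randomly sampled patches of real and generated frames rather than on whole frames. All names here (`random_patches`, `patch_adversarial_loss`, the placeholder discriminator) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_patches(frames, num_patches=8, size=8):
    """Sample square patches at random frame/location from a
    stack of frames with shape (T, H, W)."""
    t, h, w = frames.shape
    out = []
    for _ in range(num_patches):
        i = rng.integers(0, t)
        y = rng.integers(0, h - size + 1)
        x = rng.integers(0, w - size + 1)
        out.append(frames[i, y:y + size, x:x + size])
    return np.stack(out)

def patch_adversarial_loss(d, real, fake, num_patches=8, size=8):
    """Hinge adversarial losses averaged over random patches.
    `d` stands in for a patch discriminator returning per-patch scores."""
    rp = random_patches(real, num_patches, size)
    fp = random_patches(fake, num_patches, size)
    d_loss = np.mean(np.maximum(0.0, 1.0 - d(rp))) \
           + np.mean(np.maximum(0.0, 1.0 + d(fp)))
    g_loss = -np.mean(d(fp))
    return d_loss, g_loss

# Placeholder "discriminator": mean intensity of each patch.
score = lambda p: p.mean(axis=(1, 2))

real = rng.random((4, 32, 32))      # stand-in for observed frames
fake = np.zeros((4, 32, 32))        # stand-in for generated frames
d_loss, g_loss = patch_adversarial_loss(score, real, fake)
```

Training patches rather than full frames keeps the discriminator focused on local texture, which is one common motivation for patch-based adversarial losses; whether the paper's loss takes this form is not stated in the abstract.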
