Abstract

While quantitative precipitation estimation (QPE) using weather radar is widely adopted in operational settings, precipitation data sets are often highly imbalanced. In particular, extreme precipitation is usually under-represented, which can become a bottleneck for radar QPE with machine learning models. Discovering the intrinsic characteristics of extreme precipitation from few samples is challenging. In this letter, we focus on radar reflectivity data and aim to generate synthetic radar image sequences for extreme precipitation. Because of the relatively long interval between consecutive radar images imposed by the radar volume scan, traditional video-generation methods are not suitable. We therefore propose Two-stage Generative Adversarial Networks (TsGANs) to address this problem. In general, our TsGAN constructs an adversarial process between generators and discriminators: the generator produces samples similar to real data, while the discriminator determines whether a sample is real or generated. In Stage I, we generate an image sequence capturing both content and motion features. In Stage II, we design an enhanced network structure that enriches the adversarial processes and further improves the motion features. Experiments are performed within the radar coverage of Shenzhen, China, on rainfall events from 2014–2016. Results show that our TsGAN outperforms previous works.
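The adversarial process described above can be illustrated with a minimal, self-contained sketch. This is not the authors' TsGAN; it is a hypothetical 1-D toy in which the "generator" is a single scalar parameter and the "discriminator" is a linear score passed through a sigmoid, trained with the standard GAN losses by alternating gradient steps:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def d_loss(d_real, d_fake):
    """Discriminator objective: -log D(x_real) - log(1 - D(x_fake))."""
    return -math.log(d_real) - math.log(1.0 - d_fake)

def g_loss(d_fake):
    """Non-saturating generator objective: -log D(x_fake)."""
    return -math.log(d_fake)

def toy_adversarial_training(steps=20, lr=0.05):
    """Alternate discriminator/generator updates on a 1-D toy problem.

    The 'real' sample is the scalar 1.0; the generator output is the
    scalar parameter g; the discriminator is D(x) = sigmoid(w*x + b).
    """
    g, w, b = 0.0, 0.5, 0.0
    x_real = 1.0
    for _ in range(steps):
        # Discriminator step: push D(x_real) up and D(g) down.
        s_r, s_f = w * x_real + b, w * g + b
        grad_w = -(1.0 - sigmoid(s_r)) * x_real + sigmoid(s_f) * g
        grad_b = -(1.0 - sigmoid(s_r)) + sigmoid(s_f)
        w -= lr * grad_w
        b -= lr * grad_b
        # Generator step: move g toward where the discriminator scores high.
        grad_g = -(1.0 - sigmoid(w * g + b)) * w
        g -= lr * grad_g
    return g
```

After a few alternating steps the generator parameter drifts toward the real sample, which is the mechanism the abstract describes; the paper's actual stages operate on radar image sequences with far richer generator and discriminator networks.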
