Abstract

Satellite image sequence prediction is a crucial and challenging task. Previous studies rely on optical flow methods or on existing deep learning models for spatio-temporal sequence prediction. However, they suffer either from oversimplified model assumptions or from blurry predictions and sequential error accumulation, particularly under long-term forecasting requirements. In this paper, we propose a novel Multi-Scale Time Conditional Generative Adversarial Network (MSTCGAN). To address the sequential error accumulation issue, MSTCGAN adopts a parallel prediction framework that produces the future image sequence from a one-hot time condition input. In addition, a powerful multi-scale generator is designed with multi-head axial attention, which helps to carefully preserve fine-grained details for appearance consistency. Moreover, we develop a temporal discriminator to address the blurriness issue and maintain motion consistency in the predictions. Extensive experiments conducted on the FengYun-4A satellite dataset demonstrate the effectiveness and superiority of the proposed method over state-of-the-art approaches.
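The key idea of the parallel prediction framework can be illustrated with a minimal sketch. The sketch below is a hypothetical, drastically simplified stand-in for the actual generator (here just a linear map with a nonlinearity; the names `generator`, `W_ctx`, and `W_t` are invented for illustration): each future frame is produced independently from the same encoded context, conditioned on a one-hot encoding of its time index, so no predicted frame is fed back as input to the next.

```python
import numpy as np

def one_hot(t, horizon):
    """One-hot time condition vector for future step t."""
    v = np.zeros(horizon)
    v[t] = 1.0
    return v

def generator(context, time_code, W_ctx, W_t):
    # Hypothetical toy "generator": fuses context features with the
    # one-hot time condition (stand-in for the real multi-scale network).
    return np.tanh(context @ W_ctx + time_code @ W_t)

rng = np.random.default_rng(0)
horizon = 4                                  # number of future frames
context = rng.standard_normal(16)            # encoded input-sequence features
W_ctx = rng.standard_normal((16, 8))
W_t = rng.standard_normal((horizon, 8))

# Parallel prediction: every future step is generated from the SAME
# context, conditioned only on its one-hot time index -- no frame feeds
# into the next, so errors cannot accumulate across the horizon.
frames = [generator(context, one_hot(t, horizon), W_ctx, W_t)
          for t in range(horizon)]
```

In contrast, an autoregressive model would feed each predicted frame back as input for the next step, compounding errors over long horizons; here all steps depend only on the observed context and their time code.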
