To address the shortcomings of existing spatiotemporal prediction models, which often struggle with temporal feature extraction and with forecasting medium-to-high echo intensity regions over long sequences, this study proposes a radar echo extrapolation model that combines a translator encoder–decoder architecture with a spatiotemporal dual-discriminator conditional generative adversarial network (STD-TranslatorNet). First, an image reconstruction network is built as the generator by combining a temporal attention unit (TAU) with an encoder–decoder framework. Intra-frame static attention and inter-frame dynamic attention mechanisms derive attention weights across image channels, effectively capturing the temporal evolution of the image sequence and strengthening the network's ability to model local spatial features together with global temporal dynamics; the encoder–decoder structure further improves feature extraction through image reconstruction. Second, a spatiotemporal dual discriminator is designed to capture both the temporal correlations and the spatial characteristics of the generated image sequences, guiding the generator's output toward more realistic echoes. Finally, a composite multi-loss function is proposed to better model the complex spatiotemporal evolution of radar echoes and to provide a more comprehensive assessment of generated image quality, improving the network's robustness. Experiments on the standard radar echo dataset (SRAD) show that the proposed method outperforms prior approaches, with the per-frame average critical success index (CSI) and probability of detection (POD) improving by 6.9% and 7.6%, respectively.
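To make the dual-discriminator guidance concrete, the following is a minimal sketch (in PyTorch) of a composite generator objective of the kind the abstract describes: a reconstruction term plus adversarial terms from separate temporal and spatial discriminators. The specific loss terms, weights, and module internals shown here are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def generator_loss(pred_seq, true_seq, d_temporal, d_spatial,
                   lambda_rec=10.0, lambda_t=1.0, lambda_s=1.0):
    """pred_seq, true_seq: (B, T, C, H, W) radar echo sequences.
    d_temporal scores whole sequences; d_spatial scores individual frames.
    All weights (lambda_*) are placeholder values."""
    # Reconstruction term: pixel-wise fidelity of the extrapolated frames.
    rec = l1(pred_seq, true_seq)
    # Temporal adversarial term: discriminator judges inter-frame consistency.
    logits_t = d_temporal(pred_seq)
    adv_t = bce(logits_t, torch.ones_like(logits_t))
    # Spatial adversarial term: discriminator judges per-frame realism.
    frames = pred_seq.flatten(0, 1)            # (B*T, C, H, W)
    logits_s = d_spatial(frames)
    adv_s = bce(logits_s, torch.ones_like(logits_s))
    return lambda_rec * rec + lambda_t * adv_t + lambda_s * adv_s
```

In this reading, the temporal discriminator penalizes implausible echo evolution across frames while the spatial discriminator penalizes blurry or unrealistic individual frames, which is how a dual discriminator can steer the generator toward sharper medium-to-high intensity regions.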