Abstract

The spatiotemporal distribution of Total Electron Content (TEC) in the ionosphere determines the refractive index of electromagnetic waves, leading to radio signal scintillation and degradation. Thanks to advances in machine learning for video prediction, spatiotemporal predictive models have been applied to forecast future TEC maps from the graphic features of past frames. However, purely graphic prediction cannot respond properly to variations in external factors such as solar or geomagnetic activity. Moreover, there is still neither a standard data set nor a comprehensive evaluation framework for spatiotemporal predictive learning of TEC map sequences, which leaves comparisons unfair and insights inconclusive. In this research, a new feature-level multimodal fusion method for machine reasoning, named the channel mixer layer, is proposed; it can be embedded into existing advanced spatiotemporal sequence prediction models. All performance benchmarks are carried out in the same running environment and on the newly proposed largest-scale data set. Experimental results show that multimodal fusion prediction with existing model backbones using the proposed method improves prediction accuracy by up to 15% at almost the same computational complexity as graphic prediction without auxiliary factor inputs, achieving a real-time inference speed of 34 frames/second and a minimum mean absolute error of 0.94/2.63 TEC units during low/high solar activity periods, respectively. Models embedding the channel mixer layer respond to variations of the auxiliary external factors more correctly than previous multimodal fusion methods such as concatenation and arithmetic operations, which is regarded as evidence of state-of-the-art machine reasoning ability.
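The abstract does not give implementation details of the channel mixer layer, but a minimal, hypothetical sketch of feature-level fusion in this spirit is given below, assuming a PyTorch-style backbone with image-like intermediate features. The class name ChannelMixerFusion, the auxiliary embedding width, and the choice of external drivers (e.g., F10.7, Kp, Dst indices) are illustrative assumptions, not the authors' design.

```python
# Illustrative sketch (not the paper's code): a channel-mixer-style fusion layer
# that injects scalar external drivers (e.g., solar/geomagnetic indices) into the
# channel dimension of intermediate TEC feature maps, so it can be dropped between
# layers of an existing spatiotemporal prediction backbone.
import torch
import torch.nn as nn


class ChannelMixerFusion(nn.Module):
    """Feature-level fusion of auxiliary scalars with image-like features.

    The auxiliary vector is embedded, broadcast over the spatial grid, and the
    concatenated channels are mixed back to the original width with a 1x1
    convolution, keeping the backbone's tensor shapes unchanged.
    """

    def __init__(self, feat_channels: int, num_aux: int, aux_embed: int = 8):
        super().__init__()
        self.aux_embed = nn.Sequential(
            nn.Linear(num_aux, aux_embed),
            nn.GELU(),
        )
        # 1x1 convolution mixes the original feature channels with the broadcast
        # auxiliary channels and restores the original channel width.
        self.mix = nn.Conv2d(feat_channels + aux_embed, feat_channels, kernel_size=1)
        self.norm = nn.GroupNorm(1, feat_channels)

    def forward(self, feats: torch.Tensor, aux: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) intermediate feature maps from the backbone
        # aux:   (B, num_aux) normalized external drivers, e.g., [F10.7, Kp, Dst]
        b, _, h, w = feats.shape
        a = self.aux_embed(aux)                        # (B, aux_embed)
        a = a[:, :, None, None].expand(b, -1, h, w)    # broadcast over the grid
        mixed = self.mix(torch.cat([feats, a], dim=1)) # channel mixing via 1x1 conv
        return self.norm(mixed) + feats                # residual keeps the graphic path intact


if __name__ == "__main__":
    layer = ChannelMixerFusion(feat_channels=64, num_aux=3)
    feats = torch.randn(2, 64, 71, 73)   # e.g., features over a 71x73 global TEC grid
    aux = torch.randn(2, 3)              # normalized solar/geomagnetic indices
    print(layer(feats, aux).shape)       # torch.Size([2, 64, 71, 73])
```

Because a 1x1 mixing convolution preserves the backbone's channel width and spatial grid, a layer of this kind can be inserted between existing blocks at little extra cost, which is consistent with the abstract's claim of nearly unchanged computational complexity relative to graphic-only prediction.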