Abstract

Models based on long short-term memory (LSTM) networks have demonstrated superior performance in short-term prediction, but their predictive ability becomes limited and their prediction time grows in long sequence time-series forecasting (LSTF). To address these issues, this paper optimizes the Transformer and Informer models in three ways: (1) input representation, by adding a time embedding layer that encodes global timestamps and a positional embedding layer, improving the model's ability to predict the aerosol extinction coefficient (AEC); (2) the self-attention mechanism, by using a probabilistic self-attention mechanism and a self-attention distillation mechanism to reduce memory usage and by enhancing the model's local modeling ability through convolutional aggregation operations; (3) generative decoding, using dynamic decoding to enhance the model's long-sequence prediction ability. Based on these optimizations, a new LSTF model for AEC is proposed. Experimental results on the Atmosphere Parameters of Maoming (APM) dataset and a weather dataset show that the proposed model achieves significant improvements in accuracy, memory usage, and runtime over comparable Transformer-based models. In the accuracy experiments, relative to the Transformer model, the MAE of this model on the APM dataset decreased from 0.237 to 0.103 and the MSE decreased from 0.345 to 0.241. In the memory-usage experiments, the model effectively alleviates memory-overflow problems when the input length exceeds 720. In the runtime experiments, with an input length of 672, the training time per round decreased from 15.32 seconds to 12.39 seconds. These experiments demonstrate the effectiveness and reliability of the proposed model and provide a new approach to long-sequence prediction of AEC.
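To make optimization (1) concrete, below is a minimal PyTorch sketch of an input representation that sums a value projection, a fixed positional embedding, and a learned global time-stamp embedding. The class name `AECInputEmbedding`, the dimensions (`d_model`, `n_time_features`), and the convolution kernel size are illustrative assumptions, not the authors' exact implementation, which the abstract does not specify.

```python
import math
import torch
import torch.nn as nn


class AECInputEmbedding(nn.Module):
    """Sketch of an Informer-style input representation:
    value projection + positional embedding + global time-stamp embedding,
    summed into a single d_model-dimensional token per time step."""

    def __init__(self, c_in, d_model, n_time_features=4, max_len=5000):
        super().__init__()
        # Project the raw AEC (and covariate) channels to d_model with a 1-D conv.
        self.value_proj = nn.Conv1d(c_in, d_model, kernel_size=3, padding=1)
        # Fixed sinusoidal positional embedding over sequence positions.
        pe = torch.zeros(max_len, d_model)
        pos = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2).float()
                        * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe.unsqueeze(0))  # (1, max_len, d_model)
        # Learned embedding of global time stamps (e.g. hour, day, month, ...).
        self.time_proj = nn.Linear(n_time_features, d_model)

    def forward(self, x, x_mark):
        # x:      (batch, seq_len, c_in)             raw series values
        # x_mark: (batch, seq_len, n_time_features)  encoded global time stamps
        v = self.value_proj(x.transpose(1, 2)).transpose(1, 2)
        return v + self.pe[:, : x.size(1)] + self.time_proj(x_mark)


# Example: embed a batch of 32 sequences of length 672 with 7 input channels.
emb = AECInputEmbedding(c_in=7, d_model=512)
tokens = emb(torch.randn(32, 672, 7), torch.randn(32, 672, 4))  # (32, 672, 512)
```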
