Abstract

Network traffic prediction (NTP) models help operators forecast, adjust, and control network usage more accurately, which in turn reduces network congestion and improves the quality of the user experience. However, network traffic data are highly complex, and NTP models with higher prediction accuracy tend to have higher complexity, which presents an obvious asymmetry between accuracy and complexity. In this work, we target the conflict between low complexity and high prediction performance and propose an NTP model based on a sparse persistent memory (SPM) attention mechanism. SPM accurately captures the sparse key features of network traffic and reduces the complexity of the self-attention layer while preserving prediction performance. The symmetric SPM encoder and decoder replace the high-complexity feed-forward sub-layer with an attention layer to reduce complexity; in addition, an attention layer that persistently memorizes key features further improves the model's prediction performance. We evaluate our method on two real-world network traffic datasets. The results demonstrate that the SPM-based method outperforms state-of-the-art (SOTA) approaches on the NTP task by 33.0% and 21.3% on the two datasets, respectively, and also achieves the best RMSE and R² scores. In terms of runtime, SPM reduces complexity and shortens training time by 22.2% and 30.4%, respectively, compared with the Transformer.
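
The abstract does not include code, but the described architecture (self-attention augmented with learned persistent memory slots, sparsified by keeping only the strongest attention scores) can be illustrated with a minimal PyTorch sketch. All names here (SPMAttention, n_mem, top_k) are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPMAttention(nn.Module):
    """Sketch of a sparse persistent-memory attention sub-layer.

    Learned persistent-memory key/value slots are concatenated to the
    input-derived keys and values (so the layer can replace a feed-forward
    sub-layer), and attention is sparsified by keeping only the top-k
    scores per query. Hyperparameter names are assumptions.
    """

    def __init__(self, d_model: int, n_heads: int, n_mem: int = 16, top_k: int = 8):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.top_k = top_k
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # Persistent memory slots, shared across positions, one set per head.
        self.mem_k = nn.Parameter(torch.randn(n_heads, n_mem, self.d_head) * 0.02)
        self.mem_v = nn.Parameter(torch.randn(n_heads, n_mem, self.d_head) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def split(z):
            # (batch, time, d_model) -> (batch, heads, time, d_head)
            return z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)

        q, k, v = split(q), split(k), split(v)
        # Prepend the persistent memory slots to the keys and values.
        mem_k = self.mem_k.unsqueeze(0).expand(b, -1, -1, -1)
        mem_v = self.mem_v.unsqueeze(0).expand(b, -1, -1, -1)
        k = torch.cat([mem_k, k], dim=2)
        v = torch.cat([mem_v, v], dim=2)
        # Scaled dot-product scores: (batch, heads, time, n_mem + time).
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        # Sparsify: keep only the top-k scores per query, mask out the rest.
        kth = scores.topk(min(self.top_k, scores.size(-1)), dim=-1).values[..., -1:]
        scores = scores.masked_fill(scores < kth, float("-inf"))
        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, t, d)
        return self.out(out)
```

As a quick shape check under these assumptions, `SPMAttention(64, 4)(torch.randn(2, 32, 64))` returns a tensor of shape `(2, 32, 64)`; the memory slots add only `n_mem` columns to the score matrix, while the top-k mask is what caps how many positions each query actually attends to.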
