Abstract
Long sequence time-series forecasting (LSTF) problems are widespread in the real world, arising in weather forecasting, stock market forecasting, and power resource management, and they demand high prediction accuracy from the model. Recent studies have shown that Transformer models have the potential to improve predictive accuracy. However, we found that the Transformer still has severe problems that prevent it from being applied directly to LSTF, such as redundant input information, which makes accurate prediction difficult. To address this problem, this paper proposes an efficient Transformer-based predictive model called Muformer. The model includes (1) an input multiple perceptual domain (MPD) processing mechanism, which processes a single input into N outputs covering different perceptual domains, thereby enhancing features; (2) a multi-granularity attention head mechanism that cooperates with the MPD mechanism: the N outputs of MPD are fed into different attention heads so that head information is fully utilized and less redundant information is generated; and (3) an attention head pruning mechanism, which prunes heads that carry similar, redundant information, thereby reducing redundancy among heads and strengthening the model's expressiveness. Extensive experimental results obtained on five large-scale datasets show that our approach significantly outperforms existing state-of-the-art methods.
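Since the abstract describes the mechanisms only at a high level, the following is a minimal PyTorch sketch of one plausible reading of the MPD plus multi-granularity head design: each perceptual domain is modeled here as a 1-D convolution with a different receptive field, and each domain's view is routed to its own group of attention heads. The class name `MPDAttention`, the choice of convolutions, and all hyperparameters are illustrative assumptions, not the authors' implementation; the head-pruning mechanism is omitted.

```python
import torch
import torch.nn as nn

class MPDAttention(nn.Module):
    """Illustrative sketch (not the paper's code): N perceptual-domain
    views of the input, each routed to its own group of attention heads."""

    def __init__(self, d_model, n_heads, kernel_sizes=(1, 3, 5, 7)):
        super().__init__()
        assert n_heads % len(kernel_sizes) == 0
        # One 1-D convolution per perceptual domain; the kernel size
        # sets that domain's receptive field (an assumption here).
        self.domains = nn.ModuleList(
            nn.Conv1d(d_model, d_model, k, padding=k // 2)
            for k in kernel_sizes
        )
        # Each domain gets its own share of the attention heads.
        self.attn = nn.ModuleList(
            nn.MultiheadAttention(d_model, n_heads // len(kernel_sizes),
                                  batch_first=True)
            for _ in kernel_sizes
        )
        # Fuse the per-domain outputs back to d_model.
        self.proj = nn.Linear(d_model * len(kernel_sizes), d_model)

    def forward(self, x):                 # x: (batch, seq_len, d_model)
        outs = []
        for conv, attn in zip(self.domains, self.attn):
            # Build one perceptual-domain view of the input sequence.
            v = conv(x.transpose(1, 2)).transpose(1, 2)
            # Self-attention restricted to this domain's head group.
            h, _ = attn(v, v, v)
            outs.append(h)
        return self.proj(torch.cat(outs, dim=-1))

# Usage: a batch of 32 length-96 sequences with 512-dimensional features.
x = torch.randn(32, 96, 512)
layer = MPDAttention(d_model=512, n_heads=8)
y = layer(x)                              # y.shape == (32, 96, 512)
```

Under this reading, feeding each head group a differently filtered view is what keeps the heads from learning near-duplicate attention patterns, which is the redundancy the abstract says the pruning mechanism then removes.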