Abstract
The fuzzy cognitive map (FCM) has been successfully applied to time series prediction due to its powerful dynamic-system modeling and inference ability. Although many FCM-based methods have been proposed, their performance is far from satisfactory. Existing FCM-based methods have two limitations: first, the feature extraction of some methods is unreasonable and can even lead to overfitting; second, most methods ignore the local temporal features of the time series. In this work, we propose a novel framework for time series prediction based on long short-term memory (LSTM) and a high-order FCM (HFCM) with an attention mechanism, termed LSTM-HFCMAM. To overcome the first limitation, unlike other FCM-based methods that use an autoencoder to forcibly decompose each point of the time series into multiple points to represent the original sequence, we use a sliding window to preprocess the original time series and an Encoder-Decoder framework to represent the resulting sequences. This gives the model better generalization and interpretability. The HFCM is then used to predict the sequence representations, exploiting its powerful causal inference ability. Finally, self-attention is applied to restore the predicted sliding-window data, focusing on the most relevant positions and effectively improving performance. To overcome the second limitation, we use an LSTM to learn the temporal features of time series fragments, thereby capturing the local features of the whole time series. We validate the performance of LSTM-HFCMAM on twelve benchmark datasets. Compared with current methods, LSTM-HFCMAM achieves an improvement of up to 51.02%. The experimental results demonstrate that LSTM-HFCMAM is effective and overcomes the above limitations.
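To make the sliding-window preprocessing and Encoder-Decoder representation step concrete, the sketch below shows one plausible realization. The paper's implementation is not published here, so the window length, hidden sizes, and module names are illustrative assumptions only, not the authors' code; the HFCM prediction and self-attention restoration steps are omitted.

```python
# Illustrative sketch only: window length, hidden sizes, and class names are
# assumptions, not the implementation described in the paper.
import numpy as np
import torch
import torch.nn as nn

def sliding_windows(series: np.ndarray, window: int) -> np.ndarray:
    """Split a 1-D time series into overlapping fragments of length `window`."""
    return np.stack([series[i:i + window] for i in range(len(series) - window + 1)])

class LSTMEncoderDecoder(nn.Module):
    """Toy LSTM Encoder-Decoder: compresses each window into a low-dimensional
    representation (which an HFCM could then forecast) and reconstructs the window."""
    def __init__(self, window: int, hidden: int = 16, latent: int = 4):
        super().__init__()
        self.encoder = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, latent)
        self.from_latent = nn.Linear(latent, hidden)
        self.decoder = nn.LSTM(input_size=hidden, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)
        self.window = window

    def forward(self, x):                        # x: (batch, window, 1)
        _, (h, _) = self.encoder(x)              # h: (1, batch, hidden)
        z = self.to_latent(h[-1])                # latent representation per window
        rep = self.from_latent(z).unsqueeze(1).repeat(1, self.window, 1)
        dec, _ = self.decoder(rep)
        return self.out(dec), z                  # reconstruction and representation

# Usage: turn a series into windows, then into latent representations.
series = np.sin(np.linspace(0, 20, 200)).astype(np.float32)
x = torch.from_numpy(sliding_windows(series, window=12)).unsqueeze(-1)
model = LSTMEncoderDecoder(window=12)
reconstruction, representation = model(x)
```

Training such an Encoder-Decoder on reconstruction loss would yield per-window representations; under the framework described above, those representations would be forecast by the HFCM and the predicted windows restored via self-attention.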