Abstract
In recent years, the fusion of multimodal time series data from sensors has attracted widespread attention. A key challenge is to extract features from the multiple modalities and obtain a shared representation that also captures the temporal characteristics of the data, so as to further improve prediction performance. To address this problem, we propose SSAE-LSTM, a multimodal time series data fusion model based on the Stacked Sparse Auto-Encoder (SSAE) and Long Short-Term Memory (LSTM). The SSAE mines the inherent correlations among the modalities to extract an effective shared representation, which is then fed into an LSTM network for fused prediction. Experiments on real time series datasets demonstrate that SSAE-LSTM learns a good shared representation of multimodal data and predicts future trends well. Compared with other neural networks, SSAE-LSTM achieves better Precision, Accuracy, Recall, and F-score.
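The abstract describes a two-stage pipeline: an SSAE compresses the concatenated multimodal features at each time step into a shared representation, and an LSTM consumes the resulting sequence to make the prediction. The following is a minimal sketch of that pipeline in PyTorch; the layer sizes, the KL-style sparsity penalty, the classification head, and all hyperparameters are illustrative assumptions, not the settings used in the paper.

```python
import torch
import torch.nn as nn


class SSAE(nn.Module):
    """Stacked sparse auto-encoder: two encoding layers plus a mirrored decoder.
    Layer widths here are assumptions for illustration only."""
    def __init__(self, in_dim, h1=64, h2=32):
        super().__init__()
        self.enc1 = nn.Linear(in_dim, h1)
        self.enc2 = nn.Linear(h1, h2)
        self.dec2 = nn.Linear(h2, h1)
        self.dec1 = nn.Linear(h1, in_dim)
        self.act = nn.Sigmoid()

    def encode(self, x):
        # Shared representation of the fused multimodal feature vector.
        return self.act(self.enc2(self.act(self.enc1(x))))

    def forward(self, x):
        z = self.encode(x)
        recon = self.dec1(self.act(self.dec2(z)))
        return recon, z


def sparsity_penalty(z, rho=0.05):
    # KL-divergence sparsity term on mean hidden activations,
    # the penalty commonly used when pre-training sparse auto-encoders.
    rho_hat = z.mean(dim=0).clamp(1e-6, 1 - 1e-6)
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()


class SSAELSTM(nn.Module):
    """SSAE shared representation per time step, followed by an LSTM predictor."""
    def __init__(self, in_dim, code_dim=32, hidden=32, n_classes=2):
        super().__init__()
        self.ssae = SSAE(in_dim, h2=code_dim)
        self.lstm = nn.LSTM(input_size=code_dim, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (batch, time, in_dim)
        b, t, d = x.shape
        z = self.ssae.encode(x.reshape(b * t, d)).reshape(b, t, -1)
        out, _ = self.lstm(z)
        return self.head(out[:, -1])           # predict from the last time step


# Toy usage: 8 sequences, 20 time steps, 10 concatenated multimodal features.
x = torch.randn(8, 20, 10)
model = SSAELSTM(in_dim=10)
logits = model(x)                              # shape: (8, 2)
```

In practice the SSAE would be pre-trained layer by layer with the reconstruction loss plus the sparsity penalty, and the whole SSAE-LSTM model fine-tuned end to end on the prediction target; those training details are assumptions about a typical setup rather than the paper's procedure.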