Abstract

Multivariate Time Series Classification (MTSC) is a crucial task for dynamic process recognition and has been widely studied. In recent years, end-to-end MTSC with Convolutional Neural Networks (CNNs) has gained increasing attention thanks to their ability to integrate local features. However, it remains a significant challenge for CNNs to handle the global information and long-range dependencies of time series. In this paper, we present a simple and feasible architecture for MTSC that addresses these problems. Our model benefits from self-attention, which helps the CNN directly capture relationships between any two time steps or variables of a time series. Experimental results of the proposed model on thirty-five complex MTSC tasks demonstrate its effectiveness and generality, outperforming existing state-of-the-art (SOTA) models overall. Moreover, our model is computationally efficient, running six hours faster than the current model.
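The core idea above, self-attention letting every time step attend directly to every other step, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the identity Q/K/V projections and the toy shapes are assumptions for illustration only.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over time steps.

    x: (T, d) array of per-time-step features (e.g. CNN feature maps).
    Returns a (T, d) array in which each time step is a softmax-weighted
    mix of all T steps, so distant steps interact directly, without the
    limited receptive field of a convolution.
    """
    d = x.shape[-1]
    # Hypothetical simplification: identity projections stand in for the
    # learned query/key/value matrices of a full attention layer.
    scores = x @ x.T / np.sqrt(d)                 # (T, T) pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over time
    return weights @ x

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4))  # 8 time steps, 4 channels (toy example)
out = self_attention(feats)
print(out.shape)  # (8, 4)
```

In a CNN-based MTSC model, such a layer would typically be applied to the convolutional feature maps so that the classifier sees both local patterns and global, long-range structure.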
