Abstract

In recent years, deep learning approaches have achieved remarkable advances in Multivariate Time Series Classification (MTSC). However, existing approaches primarily focus either on capturing the long-term correlations of time series or on identifying local key sequence fragments, neglecting the synergy between global and local components. Additionally, most representation learning methods for MTSC rely on self-supervised learning, which limits their ability to fully exploit label information. Hence, this paper proposes a novel approach termed the Dual-Stream Encoder and Dual-Level Contrastive Learning Network (DSDCLNet), which integrates a Dual-Stream Encoder (DSE) and Dual-Level Contrastive Learning (DCL). First, to extract multi-scale local-global features from multivariate time series data, we employ a DSE architecture comprising an Attention-Gated Recurrent Unit (AGRU) and a Dual-layer Multi-Scale Convolutional Neural Network (DMSCNN); specifically, DMSCNN consists of a series of multi-scale convolutional layers followed by a max-pooling layer. Second, to maximize the utilization of label information, a new loss function is designed that combines classification loss, instance-level contrastive loss, and temporal-level contrastive loss. Finally, experiments conducted on the UEA datasets demonstrate that DSDCLNet achieves the highest average accuracy of 0.77, outperforming traditional approaches, deep learning approaches, and self-supervised approaches on 30, 23, and 27 datasets, respectively.
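The abstract describes a loss that combines classification, instance-level contrastive, and temporal-level contrastive terms. The following NumPy sketch shows one plausible way such a combination could be assembled; the InfoNCE form of the contrastive terms, the per-timestep treatment of the temporal loss, and the weighting coefficients `lam_inst` and `lam_temp` are all assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def info_nce(z1, z2, tau=0.1):
    # Contrastive loss: the two views of the same series are positives,
    # all other pairs in the batch act as negatives.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                     # (B, B) similarity matrix
    log_p = np.log(softmax(sim, axis=1))
    return -np.mean(np.diag(log_p))           # diagonal = positive pairs

def cross_entropy(logits, labels):
    p = softmax(logits, axis=1)
    return -np.mean(np.log(p[np.arange(len(labels)), labels]))

def combined_loss(logits, labels, z1, z2, t1, t2,
                  lam_inst=0.5, lam_temp=0.5):
    # logits: (B, C) classifier outputs; labels: (B,) ground truth
    # z1, z2: (B, D) instance-level embeddings of two augmented views
    # t1, t2: (B, T, D) per-timestep features of the two views
    l_cls = cross_entropy(logits, labels)
    l_inst = info_nce(z1, z2)
    # Temporal-level term: contrast the two views timestep by timestep
    # (a simplification; the paper's exact definition may differ).
    T = t1.shape[1]
    l_temp = np.mean([info_nce(t1[:, t], t2[:, t]) for t in range(T)])
    return l_cls + lam_inst * l_inst + lam_temp * l_temp
```

The classification term lets label information shape the representation directly, which is the stated motivation for moving beyond purely self-supervised objectives.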
