Abstract

Time-series representation learning is crucial for extracting meaningful representations from time-series data with temporal dynamics and sparse labels. Contrastive learning, a powerful technique for exploiting inherent data patterns, has been applied to explore the diverse consistencies in time-series data through careful selection of contrastive pairs and design of appropriate losses. Encouraging such consistencies is essential for acquiring comprehensive representations of time-series data. In this paper, we propose a new framework for time-series representation learning that combines the advantages of contextual, temporal, and transformation consistencies, enabling the network to learn general representations suitable for different tasks and domains. First, positive and negative pairs are generated to establish a multi-task learning setup. Then, contrastive losses are formulated to capture contextual, temporal, and transformation consistencies, and these losses are jointly optimized to learn general time-series representations. In addition, we investigate an uncertainty weighting approach to enhance the effectiveness of multi-task learning. To evaluate the framework, we conduct experiments on three downstream tasks: time-series classification, forecasting, and anomaly detection. The experimental results demonstrate the superior performance of our framework compared to benchmark models across different tasks, and the framework also performs efficiently in cross-domain transfer learning scenarios.
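
The abstract mentions an uncertainty weighting approach for combining the three contrastive objectives. As an illustration only, the sketch below shows one common formulation of uncertainty-based multi-task weighting (homoscedastic uncertainty in the style of Kendall et al., 2018) applied to three loss terms; the module name, task count, and loss variable names are assumptions for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Combine several task losses with learnable uncertainty weights:
    total = sum_i exp(-s_i) * L_i + s_i, where s_i = log(sigma_i^2)
    is a learnable log-variance per task (a common formulation,
    not necessarily the exact one used in the paper)."""

    def __init__(self, num_tasks: int = 3):
        super().__init__()
        # one learnable log-variance per consistency objective
        # (e.g., contextual, temporal, transformation)
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        # losses: iterable of scalar loss tensors, one per task
        total = 0.0
        for i, loss in enumerate(losses):
            precision = torch.exp(-self.log_vars[i])
            total = total + precision * loss + self.log_vars[i]
        return total

# Hypothetical usage: weight the three contrastive losses before backprop.
# weighter = UncertaintyWeightedLoss(num_tasks=3)
# total_loss = weighter([contextual_loss, temporal_loss, transform_loss])
# total_loss.backward()
```

The learnable log-variances let the optimizer down-weight noisier or harder objectives automatically, rather than relying on hand-tuned loss coefficients.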
