Abstract

Time series self-supervised learning is gaining importance owing to its powerful ability to learn representations from unlabeled data. One of its greatest challenges is extracting generalizable representations from time series without labels. Existing research primarily pairs time-based augmentation with contrastive learning frameworks to generate positive and negative samples. However, most studies derive time series representations from the time domain alone, often using segment-level augmentations obtained from time slices; such augmentation-dependent methods may introduce sampling bias and misoptimization due to the loss of global context. We therefore propose TS-TFSIAM, a novel framework tailored to self-supervised time series learning that incorporates both time and frequency domain information without relying on data augmentation. First, instead of the traditional data augmentation module, we use a time domain encoder and a frequency domain encoder to transform the raw time series into two different yet correlated views. Next, we introduce a time–frequency contrasting module that obtains generalizable representations through a difficult prediction task across the time and frequency domains. Finally, we add a contextual contrasting module on top of the time–frequency contrasting module to facilitate learning high-quality representations. We evaluate TS-TFSIAM on six benchmark datasets with carefully designed ablation studies, showing that it considerably outperforms previous competitive baselines. Moreover, the learned representations generalize well in few-label and transfer learning experiments.
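To make the dual-view idea above concrete, the following is a minimal, hypothetical PyTorch sketch: a time-domain encoder processes the raw series while a frequency-domain encoder processes its FFT magnitude spectrum, and a standard NT-Xent loss aligns the two views. All names (TimeEncoder, FreqEncoder, nt_xent), the architectures, and the loss choice are illustrative assumptions; the paper's actual time–frequency and contextual contrasting modules are more involved than this sketch.

```python
# Hypothetical sketch of the two-view setup (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TimeEncoder(nn.Module):
    """Encodes the raw time series (time-domain view)."""
    def __init__(self, in_ch=1, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 32, kernel_size=8, stride=2, padding=4),
            nn.ReLU(),
            nn.Conv1d(32, dim, kernel_size=8, stride=2, padding=4),
            nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, x):                  # x: (B, C, T)
        return self.net(x).squeeze(-1)     # (B, dim)

class FreqEncoder(nn.Module):
    """Encodes the magnitude spectrum (frequency-domain view)."""
    def __init__(self, in_ch=1, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 32, kernel_size=8, stride=2, padding=4),
            nn.ReLU(),
            nn.Conv1d(32, dim, kernel_size=8, stride=2, padding=4),
            nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, x):
        spec = torch.fft.rfft(x, dim=-1).abs()  # frequency view of x
        return self.net(spec).squeeze(-1)

def nt_xent(z_t, z_f, tau=0.2):
    """Simplified one-directional NT-Xent loss: (z_t[i], z_f[i]) are positives."""
    z_t, z_f = F.normalize(z_t, dim=-1), F.normalize(z_f, dim=-1)
    logits = z_t @ z_f.t() / tau            # (B, B) cross-view similarities
    targets = torch.arange(z_t.size(0))     # matching indices are positives
    return F.cross_entropy(logits, targets)

# Toy usage: one pretraining step on random data.
x = torch.randn(16, 1, 256)                 # batch of univariate series
z_t, z_f = TimeEncoder()(x), FreqEncoder()(x)
loss = nt_xent(z_t, z_f)                    # align time and frequency views
loss.backward()
```

The key design point this sketch illustrates is that the two "views" come from two encoders over different domains of the same signal, rather than from stochastic augmentations of time slices, so no global context is discarded when constructing positive pairs.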
