Abstract

Over the years, a number of semisupervised deep-learning algorithms have been proposed for time-series classification (TSC). In semisupervised deep learning, viewed through the representation hierarchy, semantic information extracted at lower levels is the basis of that extracted at higher levels. The authors ask whether the extracted high-level semantic information can, in turn, help capture low-level semantic information. This paper studies this problem and proposes a robust semisupervised model with self-distillation (SD), called SelfMatch, that simplifies existing semisupervised learning (SSL) techniques for TSC. SelfMatch hybridizes supervised learning, unsupervised learning, and SD. In the unsupervised component, SelfMatch applies pseudolabeling to unlabeled data: a weakly augmented sequence is used as a target to guide the prediction for a Timecut-augmented version of the same sequence. SD promotes knowledge flow from higher to lower levels, guiding the extraction of low-level semantic information. This paper also designs a feature extractor for TSC, called ResNet–LSTMaN, responsible for feature and relation extraction. Experimental results show that SelfMatch achieves excellent SSL performance on 35 widely adopted UCR2018 data sets, compared with a number of state-of-the-art semisupervised and supervised algorithms.
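The abstract gives no equations, but the two unsupervised objectives it describes are standard enough to sketch. Below is a minimal NumPy illustration, not the paper's implementation: the function names, the 0.95 confidence threshold, the hard-pseudolabel cross-entropy form, and the KL form of the self-distillation term are all assumptions for illustration only.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def pseudolabel_loss(logits_weak, logits_strong, threshold=0.95):
    """Consistency on unlabeled data: the weakly augmented view supplies a
    hard pseudolabel that guides the prediction for the strongly
    (e.g. Timecut-) augmented view; low-confidence samples are masked out.
    (Threshold and loss form are assumptions, not from the paper.)"""
    probs = softmax(logits_weak)                      # soft predictions, weak view
    targets = probs.argmax(axis=1)                    # hard pseudolabels
    mask = probs.max(axis=1) >= threshold             # keep confident samples only
    logp = np.log(softmax(logits_strong) + 1e-12)     # log-probs, strong view
    per_sample = -logp[np.arange(len(targets)), targets]
    return (per_sample * mask).mean()

def self_distillation_loss(logits_deep, logits_shallow):
    """Self-distillation sketch: the deepest head's soft predictions teach a
    shallower head, pushing high-level semantics toward lower levels.
    (The KL(deep || shallow) form is an assumption.)"""
    p = softmax(logits_deep)
    q = softmax(logits_shallow)
    return (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=1).mean()
```

Under this reading, the total training objective would combine a supervised cross-entropy on the labeled subset with these two terms on unlabeled data; the weighting between them is not specified in the abstract.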

