Abstract

Time series classification is an essential task in many real-world application domains. As a popular deep learning architecture, convolutional neural networks (CNNs) have achieved excellent performance on time series classification tasks. However, the filters of a CNN have fixed lengths and are shared across all samples, whereas each time series usually exhibits features at different time scales, so CNNs cannot flexibly extract multi-scale features for each sample. In this paper, we propose a dynamic multi-scale convolutional neural network that dynamically extracts the multi-scale feature representations present in each time series. Specifically, we design a variable-length filter generator that produces a set of variable-length filters conditioned on the input time series. To keep the model differentiable, learnable soft masks are used to control the lengths of the variable-length filters, so that feature representations at different time scales can be captured. Max-over-time pooling then selects the most discriminative local patterns, and a fully connected layer with softmax output computes the final probability distribution over the classes. Experiments conducted on extensive time series datasets show that our approach improves time series classification performance through the learning of variable-length filters. Furthermore, visualization analysis demonstrates the effectiveness of dynamically learning variable-length filters for each sample.
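
To make the soft-mask mechanism concrete, the sketch below shows one way a per-sample soft mask could gate a bank of shared base filters so that their effective lengths vary with the input. The module name SoftMaskedConv1d, the mask-generator architecture, and all layer sizes are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftMaskedConv1d(nn.Module):
    """Illustrative sketch (not the paper's exact design): a 1-D convolution whose
    effective filter lengths are controlled by per-sample soft masks predicted
    from a summary of the input series."""

    def __init__(self, in_channels=1, out_channels=32, max_len=15):
        super().__init__()
        self.max_len = max_len
        # Shared base filters at the maximum length; soft masks "shorten" them per sample.
        self.weight = nn.Parameter(0.1 * torch.randn(out_channels, in_channels, max_len))
        # Hypothetical generator: pools the series, then predicts one mask logit per filter tap.
        self.mask_gen = nn.Sequential(
            nn.Linear(in_channels, 64),
            nn.ReLU(),
            nn.Linear(64, out_channels * max_len),
        )

    def forward(self, x):  # x: (batch, in_channels, time)
        b, c, t = x.shape
        summary = x.mean(dim=-1)                              # (batch, in_channels)
        mask = torch.sigmoid(self.mask_gen(summary))          # (batch, out * max_len)
        mask = mask.view(b, -1, 1, self.max_len)              # (batch, out, 1, max_len)
        filters = self.weight.unsqueeze(0) * mask             # per-sample masked filters
        # Grouped-convolution trick: apply a different filter bank to each sample.
        out = F.conv1d(
            x.reshape(1, b * c, t),
            filters.reshape(-1, c, self.max_len),
            groups=b,
            padding=self.max_len // 2,
        )
        return out.view(b, -1, out.size(-1))                  # (batch, out_channels, time)
```

With an odd max_len, the padding keeps the output the same length as the input, so the learned masks alone determine each filter's effective receptive field.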

Highlights

  • According to the number of variables in the time series data, time series classification (TSC) tasks can be categorized into two types: univariate time series classification (UTSC) and multivariate time series classification (MTSC).

  • To address the above issues, in this paper we propose a novel end-to-end variable-length filter learning model, called Dynamic Multi-Scale Convolutional Neural Network (DMS-CNN), for TSC tasks.

  • DMS-CNN performs significantly better than the baseline models, which further verifies the effectiveness of the variable-length filters.


Summary

INTRODUCTION

To address the above issues, in this paper we propose a novel end-to-end variable-length filter learning model, called Dynamic Multi-Scale Convolutional Neural Network (DMS-CNN), for TSC tasks. Experiments conducted on 85 UCR time series datasets show that DMS-CNN improves the performance of TSC tasks, and visualization analysis further demonstrates the effectiveness of dynamically learning variable-length filters. Although CNNs can use filters of multiple lengths to extract multi-scale temporal features of time series, they cannot adaptively learn variable-length filters conditioned on the input time series. In DMS-CNN, the input time series is fed into a convolutional layer with variable-length filters to capture its multi-scale temporal features, and a fully connected layer with softmax output is used to compute the final probability distribution over the classes.
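
The following sketch assembles that pipeline end to end. It assumes a convolutional module like the SoftMaskedConv1d sketch above and illustrative layer sizes, so it should be read as an outline under those assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class TSCPipeline(nn.Module):
    """Sketch of the described pipeline: variable-length-filter convolution,
    max-over-time pooling, then a fully connected softmax classifier.
    The `conv` module and the layer sizes are illustrative assumptions."""

    def __init__(self, conv, num_filters=32, num_classes=5):
        super().__init__()
        self.conv = conv                                   # e.g., SoftMaskedConv1d(...)
        self.fc = nn.Linear(num_filters, num_classes)

    def forward(self, x):                                  # x: (batch, channels, time)
        feats = torch.relu(self.conv(x))                   # (batch, num_filters, time')
        pooled = feats.max(dim=-1).values                  # max-over-time pooling
        return torch.softmax(self.fc(pooled), dim=-1)      # class probabilities

# Usage on a toy batch of 8 univariate series of length 128:
# model = TSCPipeline(SoftMaskedConv1d(in_channels=1, out_channels=32), 32, 5)
# probs = model(torch.randn(8, 1, 128))                    # shape (8, 5)
```

In practice the softmax would usually be folded into a cross-entropy loss during training; it is written out explicitly here only to mirror the description above.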

VARIABLE-LENGTH FILTERS GENERATOR
CONVOLUTIONAL LAYER WITH VARIABLE-LENGTH FILTERS
TRAINING
Algorithm step (excerpt): obtain the subsequences S of the time series T with a sliding window of length l and stride 1; a sketch of this step follows.
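
As a concrete illustration of this step, the helper below (a hypothetical function, not taken from the paper) extracts all subsequences of T with window length l and stride 1 using torch.Tensor.unfold.

```python
import torch

def sliding_subsequences(T: torch.Tensor, l: int) -> torch.Tensor:
    """All subsequences of the 1-D series T with window length l and stride 1.
    Returns a tensor of shape (len(T) - l + 1, l)."""
    return T.unfold(dimension=0, size=l, step=1)

# Example: a series of length 8 with l = 3 yields 6 overlapping subsequences.
S = sliding_subsequences(torch.arange(8.0), l=3)
```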
EXPERIMENTS
Findings
CONCLUSIONS AND FUTURE WORK