Abstract

Time series classification models have been garnering significant attention in the research community. However, little research has been done on generating adversarial samples for these models, and such samples can become a security concern. In this paper, we propose utilizing an adversarial transformation network (ATN) on a distilled model to attack various time series classification models. The proposed attack utilizes the distilled model as a surrogate that mimics the behavior of the attacked classical time series classification models. Our proposed methodology is applied to 1-nearest neighbor dynamic time warping (1-NN DTW) and a fully convolutional network (FCN), both of which are trained on 42 University of California Riverside (UCR) datasets. In this paper, we show that both models are susceptible to attacks on all 42 datasets. When compared to the fast gradient sign method (FGSM), the proposed attack generates a larger fraction of successful adversarial black-box attacks. A simple defense mechanism is successfully devised to reduce the fraction of successful adversarial samples. Finally, we recommend that future researchers who develop time series classification models incorporate adversarial data samples into their training data sets to improve resilience to adversarial samples.
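The defense alluded to above amounts, in spirit, to adversarial training: mixing crafted adversarial samples into each training step. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the `attack` callable, the loss weighting `alpha`, and the training-step structure are assumptions, not the paper's exact procedure.

```python
import torch.nn.functional as F

def adversarially_augmented_step(model, opt, x, y, attack, alpha=0.5):
    """One training step on a mix of clean and adversarial samples.

    `attack(model, x, y)` is any sample-crafting routine (e.g., FGSM);
    `alpha` weights the clean loss against the adversarial loss.
    """
    x_adv = attack(model, x, y).detach()  # crafted adversarial batch
    loss = (alpha * F.cross_entropy(model(x), y)
            + (1 - alpha) * F.cross_entropy(model(x_adv), y))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```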

Highlights

  • Over the past decade, machine learning and deep learning have been powering several aspects of society [1] and are being used in areas such as web search [2], recommendation systems [3], and wearables [4]

  • We propose a proxy attack strategy on a target classifier via a student model, trained using standard model distillation techniques to mimic the behavior of the target classical time series classification models (a minimal sketch follows this list)

  • We apply our methodologies to 1-Nearest Neighbor Dynamic Time Warping (1-NN DTW), a Fully Connected Network, and a Fully Convolutional Network (FCN), all trained on 42 University of California Riverside (UCR) datasets
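As a rough illustration of the distillation step referenced above, the sketch below trains a small student network on the soft predictions of a black-box target. Everything here is a hypothetical stand-in: `target_predict` replaces the real attacked classifier (e.g., 1-NN DTW), and the student architecture, loss, and hyperparameters are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CLASSES, SERIES_LEN = 3, 128

# Stand-in for the black-box target (e.g., 1-NN DTW): only its output
# probabilities are observable from the attacker's side. A fixed random
# linear scorer keeps the sketch runnable end to end.
_W = torch.randn(SERIES_LEN, N_CLASSES)

def target_predict(x):
    # x: (batch, 1, SERIES_LEN) -> class probabilities (batch, N_CLASSES)
    return F.softmax(x.squeeze(1) @ _W, dim=1)

class StudentFCN(nn.Module):
    """Small 1-D convolutional student trained to mimic the target."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=8, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, N_CLASSES),
        )

    def forward(self, x):
        return self.net(x)

def distill(student, batches, epochs=20, lr=1e-3):
    """Fit the student to the target's soft labels (standard distillation)."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for x in batches:             # x: (batch, 1, SERIES_LEN)
            soft = target_predict(x)  # teacher's soft labels
            loss = F.kl_div(F.log_softmax(student(x), dim=1),
                            soft, reduction="batchmean")
            opt.zero_grad(); loss.backward(); opt.step()
    return student

student = distill(StudentFCN(), [torch.randn(16, 1, SERIES_LEN)])
```

Once the student agrees with the target on most queries, the student is differentiable, so gradient-based attacks crafted against it can be transferred to the black-box target.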

Introduction

Machine learning and deep learning have been powering several aspects of society [1] and are being used in areas such as web search [2], recommendation systems [3], and wearables [4]. With the advent of smart sensors, advancements in data collection and storage at vast scales, and the ease of data analytics and predictive modelling, time series data collected from various sensors can be analyzed to determine regular patterns that are interpretable and exploitable. Classifying these time series has been an area of interest for several researchers [5]–[8]. Utilizing a variety of adversarial attacks, complex models can be tricked into predicting the wrong class label. This is a serious security issue for neural networks widely used in vision-based tasks, where adding slight perturbations or carefully crafted noise to an input image can mislead the image classification algorithm into making highly confident, yet wildly inaccurate, predictions.
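For concreteness, the fast gradient sign method (FGSM), the baseline the abstract compares against, crafts such a perturbation in a single gradient step: x_adv = x + epsilon * sign(grad_x L(f(x), y)). A minimal sketch, assuming a differentiable PyTorch classifier `model` (the function name and epsilon value are illustrative):

```python
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    """One-step FGSM: nudge each input element in the direction that
    increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```

Here `epsilon` is the perturbation budget: larger values flip more predictions but make the noise easier to detect.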
