Abstract

• Classifying time series with Euclidean distance is robust against adversarial attacks.
• Time series classifiers using fixed kernels are robust against adversarial attacks.
• These claims are demonstrated empirically on 85 datasets against 2 state-of-the-art adversarial attacks.

Deep neural networks have been shown to be vulnerable to specifically crafted perturbations designed to degrade their predictive performance. Such perturbations, formally termed 'adversarial attacks', have been designed for various domains in the literature, most prominently in computer vision and, more recently, in time series classification. There is therefore a need for robust strategies to defend deep networks against such attacks. In this work we propose axioms of robustness against adversarial attacks in time series classification. We then design a suitable experimental methodology and empirically validate the hypotheses put forth. The results of our investigation confirm the proposed hypotheses and provide a strong empirical baseline for mitigating the effects of adversarial attacks in deep time series classification.
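
The first highlight refers to nearest-neighbour classification of time series under Euclidean distance. As a purely illustrative sketch of that kind of baseline (the function names and synthetic data below are assumptions for demonstration, not the authors' implementation or experimental setup):

```python
# Minimal sketch of a 1-nearest-neighbour time series classifier using
# Euclidean distance. Illustrative only; not the paper's code.
import numpy as np

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two equal-length univariate series."""
    return float(np.sqrt(np.sum((a - b) ** 2)))

def predict_1nn(train_X: np.ndarray, train_y: np.ndarray, query: np.ndarray):
    """Assign the label of the training series closest to the query."""
    distances = [euclidean_distance(x, query) for x in train_X]
    return train_y[int(np.argmin(distances))]

# Illustrative usage with synthetic series of length 100.
rng = np.random.default_rng(0)
train_X = rng.normal(size=(20, 100))
train_y = rng.integers(0, 2, size=20)
query = rng.normal(size=100)
print(predict_1nn(train_X, train_y, query))
```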
