Abstract

Time-series data are widespread in industrial scenarios. To recover and infer missing information in real-world applications, time-series prediction has been widely studied as a classical research topic in data mining. Deep learning architectures have been viewed as next-generation time-series prediction models. However, recent studies have shown that deep learning models are vulnerable to adversarial attacks. In this study, we examine the problem of adversarial attacks on time-series prediction and propose an attack strategy that generates an adversarial time series by adding malicious perturbations to the original time series, degrading the performance of time-series prediction models. Specifically, we propose a perturbation-based adversarial example generation algorithm that uses the gradient information of the prediction model. In practice, unlike images, where perturbations can remain imperceptible to humans, time-series data are more sensitive to abnormal perturbations, so the amount of perturbation is subject to more stringent constraints. To address this challenge, we craft adversarial time series based on an importance measurement, perturbing the original data only slightly. Through comprehensive experiments on real-world time-series datasets, we verify that the proposed adversarial attack methods not only effectively fool the target time-series prediction model, LSTNet, but also attack state-of-the-art CNN-, RNN-, and MHANET-based models. The results also show that the proposed methods achieve good transferability: adversarial examples generated for a specific prediction model can significantly degrade the performance of other models. Moreover, a comparison with existing adversarial attack approaches shows that much smaller perturbations suffice for the proposed importance-measurement-based attack. The methods described in this paper help in understanding the impact of adversarial attacks on time-series prediction and in promoting the robustness of such prediction technologies.
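The abstract does not reproduce the paper's exact algorithm, so the following is only a minimal sketch of the general idea under stated assumptions: an FGSM-style gradient perturbation in PyTorch, where the "importance measurement" is approximated by gradient magnitude so that only the top-k most influential time steps are perturbed. The model `TinyForecaster`, the function `importance_attack`, and all parameter values are hypothetical stand-ins, not the authors' implementation.

```python
# Sketch only: gradient-based adversarial perturbation of a time-series
# forecaster, with an importance mask approximated by gradient magnitude.
# This is NOT the paper's algorithm; names and hyperparameters are assumed.
import torch
import torch.nn as nn

class TinyForecaster(nn.Module):
    """Stand-in prediction model: LSTM encoder + linear head."""
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])       # predict the next step

def importance_attack(model, x, y, eps=0.01, top_k=5):
    """Perturb only the top_k time steps with the largest gradient norm."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.mse_loss(model(x_adv), y)
    loss.backward()
    grad = x_adv.grad                            # (batch, time, features)
    # Importance score per time step: L2 norm of the gradient over features.
    scores = grad.norm(dim=-1)                   # (batch, time)
    idx = scores.topk(top_k, dim=1).indices      # most influential steps
    mask = torch.zeros_like(scores).scatter_(1, idx, 1.0).unsqueeze(-1)
    # FGSM-style step, restricted to the selected time steps.
    return (x + eps * grad.sign() * mask).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyForecaster()
    x = torch.randn(8, 24, 1)                    # 8 series, 24 time steps
    y = torch.randn(8, 1)                        # next-step targets
    x_adv = importance_attack(model, x, y)
    print("perturbed steps per series:", (x_adv != x).any(-1).sum(1))
```

Restricting the perturbation to a few high-gradient time steps is one plausible reading of the paper's "slight perturbation" constraint: it keeps the total perturbation budget small while still targeting the positions to which the model is most sensitive.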
