Abstract

Deep learning models have gained significant popularity for time series forecasting in industrial systems due to their high accuracy. However, these models are vulnerable to adversarial attacks, posing significant cost and security risks. Existing attack methods for time series, primarily adapted from those developed for image classifiers, fail to effectively explore the vulnerability of time series forecasting models, since they overlook the distinct characteristics and temporal patterns inherent in time series data. To address this challenge and inspire future research aimed at improving the reliability of time series forecasting models, we identify the goals of adversarial attacks on time series forecasting and propose a novel white-box adversarial attack method named TCA. Specifically, TCA exploits gradient information from the target model, iteratively applies perturbations to the original samples, and constrains these perturbations based on temporal characteristics. Extensive experiments on multiple DL models and real-world time series datasets reveal the shortcomings of existing attacks for time series forecasting and demonstrate the effectiveness, stealthiness, and rationality of TCA attacks in both untargeted and targeted attack scenarios.
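The abstract's high-level recipe (iterative, gradient-guided perturbation with a temporal constraint) can be illustrated with a minimal sketch. This is not the paper's TCA algorithm; it is a generic PGD-style targeted attack on a toy linear forecaster, with a hypothetical smoothness constraint (capping first differences of the perturbation) standing in for the temporal constraint the paper describes. All function names and parameters here are assumptions for illustration.

```python
import numpy as np

def forecast(x, w):
    """Toy linear forecaster: next value as a weighted sum of the input window."""
    return x @ w

def pgd_attack(x, w, y_target, eps=0.3, step=0.05, iters=50, smooth_eps=0.1):
    """PGD-style targeted attack sketch (NOT the paper's TCA).

    Iteratively perturbs the input window using the analytic gradient of a
    squared-error loss toward a desired target forecast, while constraining
    the perturbation's magnitude (eps) and its first differences (smooth_eps)
    as a stand-in temporal/smoothness constraint.
    """
    delta = np.zeros_like(x)
    for _ in range(iters):
        pred = forecast(x + delta, w)
        # Gradient of (pred - y_target)^2 w.r.t. the input (analytic here).
        grad = 2.0 * (pred - y_target) * w
        # Signed gradient step toward the target forecast.
        delta -= step * np.sign(grad)
        # Temporal constraint: cap first differences to keep the
        # perturbation smooth, then rebuild it by cumulative sum.
        d = np.clip(np.diff(delta), -smooth_eps, smooth_eps)
        delta = np.concatenate([[delta[0]], delta[0] + np.cumsum(d)])
        # Magnitude constraint: project back into the eps-ball.
        delta = np.clip(delta, -eps, eps)
    return x + delta
```

A real attack on a deep forecaster would obtain `grad` via automatic differentiation (e.g., backpropagation through the model) rather than the closed form used for this linear toy.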

