Abstract

Multivariate time series forecasting plays an increasingly critical role in applications such as power management, smart cities, finance, and healthcare. Recent advances in temporal graph neural networks (GNNs) have shown promising results in multivariate time series forecasting, owing to their ability to characterize high-dimensional nonlinear correlations and temporal patterns. However, the vulnerability of deep neural networks (DNNs) to adversarial perturbations raises serious concerns about using these models for decision-making in real-world applications. To date, the question of how to defend multivariate forecasting models, especially temporal GNNs, has been largely overlooked. Existing adversarial defense studies focus mostly on static, single-instance classification settings and cannot be applied to forecasting because of the generalization challenge and the contradiction issue. To bridge this gap, we propose an adversarial danger identification method for temporally dynamic graphs that effectively protects GNN-based forecasting models. Our method consists of three steps: 1) a hybrid GNN-based classifier that identifies dangerous times; 2) approximate linear error propagation that identifies dangerous variates by exploiting the high-dimensional linearity of DNNs; and 3) a scatter filter, controlled by the two identification processes, that reforms the time series with reduced feature erasure. Experiments covering four adversarial attack methods and four state-of-the-art forecasting models demonstrate the effectiveness of the proposed method in defending forecasting models against adversarial attacks.

