Abstract

For attack detection in the smart grid, transfer learning is a promising way to tackle data distribution divergence and maintain performance under system and attack variations. However, two challenges remain when introducing transfer learning into intrusion detection: when to apply transfer learning and how to extract effective features during transfer learning. To address these challenges, this paper proposes a transferability analysis and domain-adversarial training (TADA) framework. The framework first leverages several data distribution divergence metrics to predict the accuracy drop of a trained model and decides whether transfer learning should be triggered to retain performance. Then, a domain-adversarial training model combining CNN and LSTM is developed to extract spatiotemporal domain-invariant features that reduce distribution divergence and improve detection performance. The TADA framework is evaluated in extensive experiments in which false data injection (FDI) attacks are injected at different times and locations. Experimental results show that the framework predicts the accuracy drop accurately, with an RMSE lower than 1.79%. Compared to state-of-the-art models, TADA achieves the highest detection accuracy, averaging 95.58%. Moreover, the robustness of the framework is validated under different attack data percentages, with an average F1-score of 92.02%.
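The transfer-trigger idea described above can be sketched with a simple distribution-divergence check. The snippet below is a minimal illustration, not the paper's method: it uses the squared maximum mean discrepancy (MMD, one common divergence metric) between source-domain and target-domain feature samples, and a hypothetical threshold to decide whether transfer learning should be triggered. The `gamma` bandwidth and `threshold` values are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.05):
    # Pairwise RBF (Gaussian) kernel: exp(-gamma * ||x - y||^2).
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-gamma * sq_dists)

def mmd2(X, Y, gamma=0.05):
    # Biased estimate of squared maximum mean discrepancy between
    # samples X (source domain) and Y (target domain).
    return (
        rbf_kernel(X, X, gamma).mean()
        + rbf_kernel(Y, Y, gamma).mean()
        - 2.0 * rbf_kernel(X, Y, gamma).mean()
    )

def should_transfer(source, target, threshold=0.05, gamma=0.05):
    # Hypothetical decision rule: trigger transfer learning (retraining
    # with domain adaptation) only when divergence exceeds a threshold.
    return mmd2(source, target, gamma) > threshold

# Toy demo: identically distributed vs. shifted target features.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 8))        # source-domain features
tgt_same = rng.normal(0.0, 1.0, size=(200, 8))   # same distribution
tgt_shift = rng.normal(1.5, 1.0, size=(200, 8))  # shifted distribution

print(should_transfer(src, tgt_same))   # low divergence: no transfer
print(should_transfer(src, tgt_shift))  # high divergence: transfer
```

In the TADA framework itself, such divergence metrics feed a regression model that predicts the accuracy drop directly; this sketch only shows the thresholding intuition behind deciding "when to apply transfer learning."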
