Abstract
Network attack detection models based on machine learning (ML) have received extensive attention and study for protecting PMU measurement data in power systems. However, even well-trained ML-based detection models are vulnerable to adversarial attacks: by adding carefully crafted perturbations to the original data, an attacker can significantly degrade the accuracy and reliability of the model, causing the control center to receive unreliable PMU measurement data. This paper takes the network attack detection model in the power system as a case study to analyze the vulnerability of ML-based detection models under adversarial attacks. A mitigation strategy based on causal theory is then proposed, which enhances the robustness of the detection model under different adversarial attack scenarios. Unlike adversarial training, this mitigation strategy does not require adversarial samples to train the model, saving computational resources. Furthermore, the strategy needs only a small amount of information about the detection model and can be transferred to various models. Simulation experiments on the IEEE node systems verify the threat of adversarial attacks against different ML-based detection models and the effectiveness of the proposed mitigation strategy.
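The attack described in the abstract, adding small perturbations that degrade a detector's output, can be illustrated with a minimal sketch. The example below is not from the paper: it uses a toy logistic-regression "detector" with made-up weights and a single measurement vector, and applies the well-known Fast Gradient Sign Method (FGSM) to craft the perturbation. All numbers and names are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear detector: p(attack | x) = sigmoid(w.x + b).
# Weights are made up for illustration.
w = np.array([1.5, -2.0, 0.8])
b = -0.1

x = np.array([0.9, -0.6, 1.2])  # a measurement the detector flags as attack
y = 1.0                          # true label: attack

p = sigmoid(w @ x + b)           # detector confidence on the clean input

# For binary cross-entropy loss, the gradient w.r.t. the input x
# of this linear model is (p - y) * w.
grad_x = (p - y) * w

# FGSM: step each feature in the sign direction that increases the loss.
eps = 0.3
x_adv = x + eps * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv + b)   # confidence drops on the perturbed input
print(p, p_adv)
```

The perturbation is bounded per feature by `eps`, so the adversarial measurement stays close to the original while still lowering the detector's confidence; stronger iterative attacks (e.g., PGD) repeat this step under the same bound.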