Abstract
Although state estimation using a bad data detector (BDD) is a key procedure in power systems, the detector is vulnerable to false data injection attacks (FDIAs). Numerous deep learning methods have been proposed to detect such attacks. However, deep neural networks are susceptible to adversarial attacks, or adversarial examples, where slight changes in inputs may lead to sharp changes in the corresponding outputs even in well-trained networks. This article introduces joint adversarial example and FDI attacks (AFDIAs) to explore various attack scenarios for state estimation in power systems. Because perturbations added directly to measurements are likely to be detected by BDDs, the proposed method instead adds perturbations to state variables, which guarantees that the attack is stealthy to BDDs. Malicious data that are stealthy to both BDDs and deep learning-based detectors can then be generated. Theoretical and experimental results show that the proposed state-perturbation-based AFDIA method (S-AFDIA) carries out attacks stealthy to both conventional BDDs and deep learning-based detectors, whereas the proposed measurement-perturbation-based adversarial FDIA method (M-AFDIA) succeeds when only deep learning-based detectors are deployed. Comparative experiments show that the proposed methods outperform state-of-the-art approaches. Moreover, the ultimate effect of the attacks can be further optimized using the proposed joint attack methods.
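The BDD-stealthiness of state-variable perturbations rests on a classical property of residual-based detection: in the linearized (DC) model z = Hx + e with a weighted least-squares estimator, injecting a = Hc shifts the state estimate by exactly c while leaving the measurement residual unchanged. The following minimal sketch, not the authors' code, illustrates this numerically; the matrix sizes, identity weighting, and random data are illustrative assumptions.

```python
# Sketch: a state-space perturbation c, injected as a = H @ c, leaves the
# BDD residual unchanged under the standard DC model z = H x + e (assumed).
import numpy as np

rng = np.random.default_rng(0)

m, n = 20, 8                      # measurements, state variables (assumed sizes)
H = rng.standard_normal((m, n))   # measurement Jacobian of the DC model
x_true = rng.standard_normal(n)
z = H @ x_true + 0.01 * rng.standard_normal(m)  # noisy measurements

def wls_estimate(z, H):
    """Weighted least-squares state estimate (identity weights for brevity)."""
    return np.linalg.solve(H.T @ H, H.T @ z)

def bdd_residual(z, H):
    """L2 norm of the residual r = z - H x_hat that a residual-based BDD tests."""
    return np.linalg.norm(z - H @ wls_estimate(z, H))

c = 0.5 * rng.standard_normal(n)  # perturbation applied to the *state* vector
a = H @ c                         # equivalent injection in measurement space

print(f"residual, clean measurements : {bdd_residual(z, H):.6f}")
print(f"residual, attacked (z + Hc)  : {bdd_residual(z + a, H):.6f}")
# The residuals match: the estimate shifts by exactly c (x_hat -> x_hat + c),
# so r is unchanged and the BDD raises no alarm.
print("estimate shift equals c      :",
      np.allclose(wls_estimate(z + a, H) - wls_estimate(z, H), c))
```

By contrast, a perturbation added directly in measurement space generally has a component outside the column space of H, which inflates the residual and risks triggering the BDD; this is why M-AFDIA is effective only when the defense relies solely on a deep learning-based detector.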