Abstract

Voltage stability assessment is essential for maintaining reliable power grid operations. Deep learning-based assessment approaches address the shortfalls of traditional time-domain simulation methods, which struggle with increasing system complexity. However, deep learning models have been shown to be vulnerable to adversarial examples in the field of computer vision. While the power grid cybersecurity community has noted this vulnerability, a domain-specific analysis of the requirements for effective attack implementation is still lacking. Although these requirements are usually easy to satisfy in computer vision tasks, they can be stringent in the context of power grids. In this paper, we systematically investigate the attack requirements and credibility of six representative adversarial example attacks, using a voltage stability assessment application for the New England 10-machine, 39-bus power system. We show that (1) compromising the voltage traces of about half of the transmission system buses is a rule-of-thumb attack requirement; (2) universal adversarial perturbations, which are applied regardless of the original clean voltage trajectory, possess the same credibility as the widely studied false data injection attacks on power grid state estimation, while input-specific adversarial perturbations are less credible; (3) the prevailing strong adversarial training thwarts universal perturbations but fails to defend against certain input-specific perturbations. To advance defenses against both universal and input-specific adversarial examples, we propose a new approach that estimates the predictive uncertainty of any given input voltage trajectory and thwarts the attacks effectively.
