Abstract

Data-driven power system stability assessment (SA) based on machine learning (ML) techniques has attracted significant research interest in recent years. However, even ML-based SA models with high assessment accuracy may be vulnerable to adversarial examples (caused by physical noise or adversarial attacks): inputs very close to the original that nevertheless produce a wrong SA result. To address this threat, this paper first proposes a universal defense strategy for ML-based SA models, based on the randomized smoothing algorithm, to resist adversarial attacks. Second, it proposes an effectiveness index for the proposed strategy that quantifies its maximum resistance to adversarial examples. Moreover, the paper provides tight mathematical proofs for the effectiveness index under the hard-smoothing, soft-smoothing, and binary scenarios. Simulation results verify that the proposed defense strategy effectively resists adversarial examples and that the proposed effectiveness index provides a formal robustness guarantee for real-time power system SA applications.
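To make the core idea concrete, the following is a minimal, self-contained sketch of hard randomized smoothing in the binary setting, not the paper's implementation: a toy stand-in classifier (`base_classifier`, a hypothetical threshold rule) is queried on Gaussian-perturbed copies of the input, the majority vote gives the smoothed prediction, and a certified l2 radius is derived from the vote probability. The classifier, threshold, `sigma`, and sample count are all illustrative assumptions.

```python
import random
from collections import Counter
from statistics import NormalDist

def base_classifier(x):
    # Hypothetical stand-in for an ML-based SA model:
    # returns 1 ("stable") if the feature sum exceeds zero, else 0 ("unstable").
    return 1 if sum(x) > 0.0 else 0

def smoothed_predict(x, sigma=0.25, n=1000, seed=0):
    """Hard randomized smoothing: majority vote over Gaussian-perturbed copies of x."""
    rng = random.Random(seed)
    votes = Counter(
        base_classifier([xi + rng.gauss(0.0, sigma) for xi in x])
        for _ in range(n)
    )
    top_class, top_count = votes.most_common(1)[0]
    # Empirical top-class probability, clipped away from 1.0 so inv_cdf is defined;
    # practical certification uses a confidence lower bound instead of this clip.
    p_hat = min(top_count / n, 1.0 - 1.0 / n)
    # Certified l2 radius in the binary case: R = sigma * Phi^{-1}(p_hat),
    # valid only when the top class wins a strict majority.
    radius = sigma * NormalDist().inv_cdf(p_hat) if p_hat > 0.5 else 0.0
    return top_class, radius

label, radius = smoothed_predict([0.8, -0.1, 0.5])
```

Any perturbation of the input with l2 norm below `radius` is then guaranteed not to change the smoothed prediction, which is the kind of formal robustness guarantee the effectiveness index quantifies.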
