Abstract

Data-driven power system dynamic security assessment (DSA) based on machine learning (ML) techniques has received significant research interest. Yet well-trained ML models with high training and testing accuracy may be vulnerable to adversarial examples: modified versions of original samples that are intentionally perturbed while remaining very close to the originals. Such adversarial examples can mislead DSA results and lead to catastrophic consequences, so the accuracy index alone is not sufficient to characterize the performance of ML-based DSA models. To evaluate ML-based DSA models and provide a formal robustness guarantee for real-time DSA, this article proposes an adversarial robustness verification method that quantifies the resilience of ML-based DSA models against all kinds of adversarial examples. A model-free, attack-independent robustness index is defined for both differentiable and nondifferentiable attack scenarios. Simulation results verify the effectiveness of the proposed adversarial robustness verification method and the superiority of the robustness index over the upper bound of adversarial perturbations computed by existing adversarial attack methods.
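To make the notion of an adversarial example concrete, the following is a minimal sketch, not the paper's method: it perturbs an input to a hypothetical differentiable DSA classifier (a toy logistic model with made-up weights) using a gradient-sign step bounded in the infinity norm. The paper's verification approach is model-free and attack-independent and certifies against all such perturbations, whereas this sketch merely constructs one attack to show how a small input change can shift a secure/insecure prediction. All names and values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained weights of a binary secure/insecure classifier.
w = rng.normal(size=8)
b = 0.1

def predict_proba(x):
    """P(insecure | x) for a logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A hypothetical operating point (e.g., bus voltages, line flows).
x = rng.normal(size=8)
p = predict_proba(x)

# Gradient-sign step: perturb along the sign of the gradient of the
# predicted probability w.r.t. the input, bounded by epsilon in the
# infinity norm, to push the prediction toward the opposite class
# while staying close to the original operating point.
epsilon = 0.2
grad_x = p * (1.0 - p) * w                      # d sigmoid(w.x + b) / dx
direction = np.sign(grad_x) if p < 0.5 else -np.sign(grad_x)
x_adv = x + epsilon * direction

print(f"original:   P(insecure) = {predict_proba(x):.3f}")
print(f"perturbed:  P(insecure) = {predict_proba(x_adv):.3f}")
print(f"L_inf distance: {np.max(np.abs(x_adv - x)):.3f}")
```

A robustness index in the spirit of the abstract would instead lower-bound the smallest perturbation norm for which any such attack can change the prediction, independently of how the attack is constructed.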
