Abstract

The vulnerability of various machine learning methods to adversarial examples has recently been explored in the literature. Power systems that rely on these vulnerable methods are therefore exposed to a serious threat from adversarial attacks. To this end, we first propose a signal-specific method and a universal signal-agnostic method for attacking power systems with generated adversarial examples. Second, black-box attacks based on the transferability of adversarial examples and the two methods above are proposed and evaluated. Third, adversarial training is adopted to defend systems against adversarial attacks. Experimental analyses demonstrate that the signal-specific attack method requires smaller perturbations than FGSM (Fast Gradient Sign Method), and that the signal-agnostic attack method can generate perturbations that cause most natural signals to be misclassified with high probability. Furthermore, the attack based on the universal signal-agnostic algorithm achieves a higher black-box transfer rate than the attack based on the signal-specific algorithm. In addition, the results show that the proposed adversarial training improves the robustness of power systems to adversarial examples.
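For context on the baseline attack and the defense named above: FGSM perturbs an input one step along the sign of the loss gradient, and adversarial training re-trains the model on such perturbed inputs. The following is a minimal PyTorch sketch of both ideas, not the paper's implementation; `model`, `x`, `y`, and `epsilon` are hypothetical placeholders, and the mixed clean/adversarial loss is one common recipe among several.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """One-step FGSM: move x along the sign of the loss gradient.

    model, x (input signals), y (labels), and epsilon (perturbation
    budget) are illustrative placeholders, not the paper's setup.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step each input epsilon in the direction that increases the
    # loss, i.e. toward misclassification.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y, epsilon):
    """One adversarial training step: craft adversarial examples on
    the fly and train on a mix of clean and adversarial losses (a
    common recipe; the paper's exact formulation may differ)."""
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(x), y) \
         + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The signal-specific method described in the abstract plays the same role as FGSM here (producing per-input perturbations, but smaller ones), while the signal-agnostic method would replace the per-batch `x_adv` computation with a single universal perturbation added to every signal.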
