Abstract

The vulnerability of various machine learning methods to adversarial examples has recently been explored in the literature. Power systems that rely on these vulnerable methods face a serious threat from adversarial examples. To this end, we first propose a more accurate and computationally efficient method, called the Adaptive Normalized Attack (ANA), for attacking power systems by generating adversarial examples. We then adopt adversarial training to defend against such attacks. Experimental analyses demonstrate that our attack method introduces less perturbation than the state-of-the-art FGSM (Fast Gradient Sign Method) and DeepFool while achieving a higher misclassification rate against the learning methods used in power systems. In addition, the results show that the proposed adversarial training improves the robustness of power systems to adversarial examples compared with state-of-the-art methods.
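For context, a minimal sketch of the FGSM baseline and an adversarial-training step referenced above is given below, assuming a PyTorch classifier. The function names (fgsm_attack, adversarial_training_step), the epsilon parameter, and the use of cross-entropy loss are illustrative assumptions; the paper's ANA method itself is not reproduced here.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon):
    # FGSM baseline (not the paper's ANA method):
    # x_adv = x + epsilon * sign(grad_x loss(model(x), y))
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon):
    # One adversarial-training step: fit the model on
    # adversarially perturbed inputs to improve robustness.
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this setup, robustness is typically evaluated by comparing the misclassification rate on adversarial examples before and after adversarial training; the choice of epsilon controls the perturbation budget being compared across attack methods.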
