Abstract

Adversarial machine learning (AML) studies how to fool a machine learning (ML) model with malicious inputs that degrade the ML method's performance. Within AML, evasion attacks are an attack category that manipulates input data during the testing phase to induce misclassification of the input by the ML model. Such manipulated inputs are called adversarial examples. In this paper, we propose a generative approach for crafting evasion attacks against three ML-based security classifiers. The proof-of-concept application for the ML-based security classifier is the classification of compromised smart meters launching false data injection attacks. Our proposed solution is validated on a real smart metering dataset. We found that detection performance for compromised meters degrades under our proposed generative evasion attack.
