Abstract

As Machine Learning (ML) algorithms become more widely deployed, cybersecurity has become a serious concern in many real-world applications involving adversaries, and this concern is especially challenging on Internet of Things (IoT) platforms. IoT-enabled applications are growing at a rapid pace in every sector, and security-related incidents are growing with them. ML algorithms are widely used to perform data analysis, reasoning, and decision-making over the data emanating from IoT devices, and securing this data during collection, communication, and computation is a major challenge. Attackers seek weaknesses in ML algorithms and attempt to deceive them into learning incorrect information from the data. Countermeasures must be developed to evaluate the security of ML models, and developing such countermeasures requires an understanding of all possible attacks on these models. Data poisoning attacks are a class of adversarial attacks on ML in which an adversary can alter a small fraction of the training data so that the trained classifier satisfies certain adversarial objectives. Recent data poisoning techniques such as the Fast Gradient Sign Method (FGSM) are static and give the attacker very limited control over the creation of adversarial data. In this research, we develop a more robust data poisoning technique for deep neural networks that uses Generative Adversarial Networks (GANs) to mount the attack. We then evaluate the performance of the proposed algorithm and compare its results with those obtained by FGSM.
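For reference, FGSM perturbs each input by a single fixed-size step along the sign of the loss gradient with respect to that input. The following minimal PyTorch sketch illustrates the generic method only, not this paper's implementation; the model, the epsilon value, and the [0, 1] clamping range are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.1):
    """Generate an FGSM adversarial example from input x with label y.

    Takes one static step of size epsilon in the direction of the sign
    of the loss gradient, which tends to increase the classifier's loss.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # A single fixed-size step: epsilon is the attacker's only knob,
    # which is the limited control this abstract attributes to FGSM.
    x_adv = x + epsilon * x.grad.sign()
    # Assumed pixel range [0, 1]; adjust for other data domains.
    return x_adv.detach().clamp(0.0, 1.0)
```

Because the step is computed once and applied uniformly, the resulting perturbation is static; a GAN-based generator, by contrast, can learn to produce poisoned samples adaptively, which is the direction this work pursues.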
