Abstract

This research attacks a Logistic Regression-based machine learning model using evasion and poisoning techniques. An adversarial attack is a strategy for fooling machine learning models with small perturbations. Logistic Regression is among the most commonly used machine learning algorithms for binary classification, and because of its significance and popularity we propose evasion and poisoning attacks against a Logistic Regression classifier. First, the Logistic Regression (LR) model is trained and tested on the MNIST handwritten digits dataset and achieves high accuracy at both training and testing time. The trained LR classifier is then attacked with evasion and poisoning methods using the Projected Gradient Descent (PGD) technique. The classifier achieved 93.40% test accuracy before the evasion attack, which dropped to 8.00% after the attack; on the training set, accuracy fell from 93.40% before the poisoning attack to 80.00% after it.
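The evasion attack described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses a small synthetic dataset in place of MNIST, and the perturbation budget `eps`, step size `alpha`, and iteration count are illustrative values chosen for the sketch.

```python
# Hedged sketch of an L-infinity PGD evasion attack on a binary
# logistic regression classifier. Synthetic data stands in for MNIST;
# eps/alpha/steps are illustrative, not the paper's settings.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

def pgd_attack(clf, X, y, eps=0.5, alpha=0.1, steps=20):
    """Ascend the cross-entropy loss in input space with gradient-sign
    steps, projecting back into the eps-ball around the clean inputs."""
    w = clf.coef_.ravel()
    b = clf.intercept_[0]
    X_adv = X.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X_adv @ w + b)))   # sigmoid probabilities
        grad = (p - y)[:, None] * w[None, :]          # d(loss)/d(x) for LR
        X_adv = X_adv + alpha * np.sign(grad)         # gradient-sign step
        X_adv = np.clip(X_adv, X - eps, X + eps)      # project to eps-ball
    return X_adv

X_adv = pgd_attack(clf, X, y)
clean_acc = clf.score(X, y)
adv_acc = clf.score(X_adv, y)
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

For logistic regression the loss gradient with respect to the input is proportional to the fixed weight vector, so each PGD step pushes every sample in the same direction (signed per-feature); the projection step keeps the perturbation within the allowed L-infinity budget.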
