Abstract

Neural networks are susceptible to adversarial samples: inputs with imperceptible noise crafted to manipulate the network's prediction. To learn robust models, a training procedure called adversarial training has been introduced, in which models are trained on mini-batches containing adversarial samples. To scale adversarial training to large datasets and networks, fast and simple methods of generating adversarial samples (e.g., FGSM, the Fast Gradient Sign Method) are used during training. It has been shown that models trained with single-step adversarial training methods (i.e., adversarial samples generated by non-iterative methods such as FGSM) are not robust; instead, they learn to generate weaker adversaries by masking the gradients. In this work, we propose a regularization term in the training loss to mitigate the effect of gradient masking during single-step adversarial training. The proposed regularization term increases the training loss when the distance between the logits (i.e., the pre-softmax outputs of the classifier) for the FGSM and R-FGSM adversaries of a clean sample becomes large (in R-FGSM, small random noise is added to the clean sample before computing its FGSM sample). The proposed single-step adversarial training is faster than the computationally expensive state-of-the-art PGD adversarial training method, while achieving on-par results.
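
To make the idea concrete, the following is a minimal PyTorch-style sketch of such a regularized single-step training loss, written from the abstract alone. The function and parameter names (fgsm_perturb, regularized_single_step_loss, epsilon, alpha, lam) are illustrative assumptions, and the exact distance measure and loss composition may differ from the paper's formulation.

```python
# Hypothetical sketch, not the paper's exact implementation.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, step):
    """Single FGSM step from a starting point x (inputs assumed in [0, 1])."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + step * grad.sign()).clamp(0, 1).detach()

def regularized_single_step_loss(model, x, y, epsilon, alpha, lam):
    # FGSM adversary of the clean sample.
    x_fgsm = fgsm_perturb(model, x, y, epsilon)
    # R-FGSM adversary: add small random noise, then take the FGSM step.
    x_rand = (x + alpha * torch.randn_like(x).sign()).clamp(0, 1)
    x_rfgsm = fgsm_perturb(model, x_rand, y, epsilon - alpha)
    # Standard single-step adversarial training loss on the FGSM samples.
    adv_loss = F.cross_entropy(model(x_fgsm), y)
    # Regularizer: penalize a large distance between the logits of the
    # FGSM and R-FGSM adversaries (mean squared error used here as an
    # assumed choice of distance).
    reg = F.mse_loss(model(x_fgsm), model(x_rfgsm))
    return adv_loss + lam * reg
```

Because both adversaries are generated with single gradient computations, each training step stays close to the cost of plain FGSM adversarial training, which is where the claimed speed advantage over multi-step PGD training comes from.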

