Abstract

The popularity of neural networks is increasing day by day. Traditional machine learning approaches to tasks such as image recognition and object detection are being replaced by deep learning solutions because of their strong performance in computer vision. Despite their superior performance in these applications, neural networks are prone to adversarial attacks. An adversarial attack feeds adversarial samples to a neural network, causing it to misclassify and degrading its overall performance. It is therefore important to maintain a network's robustness by identifying, analyzing, and eliminating the cause of its vulnerability. In this paper, we introduce a technique to determine the most sensitive frequency band of input samples and to filter the noise from this band to shield the network against adversarial attacks. First, we decompose the input sample into four frequency components and identify the sensitive component by measuring the change in behavior of the pre-trained network on the clean frequency band versus the same band with added noise (the frequency band of an adversary). Next, we exploit this vulnerable component to help the network tackle adversaries through noise filtering, thereby enhancing the network's performance and defending against adversarial attacks. The low-frequency component was the most vulnerable, and mitigating the noise in this band significantly improved the accuracy of Convolutional Neural Networks (CNNs), as well as that of state-of-the-art networks, against adversarial attacks such as the Fast Gradient Sign Method (FGSM), DeepFool (DF), and other techniques. The proposed technique improved classification accuracy from 85% to 95% for ResNet50.
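The core pipeline described above (decompose the input into four frequency components, then attenuate noise in the vulnerable band) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes an FFT-based radial band split into four components, whereas the paper does not specify the decomposition (it could, for example, use a wavelet transform), and the `suppress_band` helper with its `keep` parameter is a hypothetical stand-in for the paper's noise-filtering step.

```python
import numpy as np

def frequency_bands(img, n_bands=4):
    """Split a grayscale image into n_bands radial frequency components
    via the 2-D FFT. The components sum back to the original image,
    so each one isolates a disjoint frequency band.
    NOTE: assumed FFT-based decomposition; the paper's actual
    decomposition method is not specified in the abstract."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    r = np.sqrt(yy**2 + xx**2)          # radial frequency of each bin
    r_max = r.max()
    bands = []
    for i in range(n_bands):
        lo, hi = i * r_max / n_bands, (i + 1) * r_max / n_bands
        # last ring is closed at the top so every bin is used exactly once
        mask = (r >= lo) & ((r < hi) if i < n_bands - 1 else (r <= hi))
        bands.append(np.fft.ifft2(np.fft.ifftshift(F * mask)).real)
    return bands

def suppress_band(img, band_idx, n_bands=4, keep=0.0):
    """Defense sketch: attenuate one frequency component (e.g. the
    vulnerable low-frequency band, band_idx=0) before the image is fed
    to the network. `keep` is a hypothetical attenuation factor; the
    paper filters noise from the band rather than scaling it wholesale."""
    bands = frequency_bands(img, n_bands)
    bands[band_idx] = bands[band_idx] * keep
    return sum(bands)
```

Identifying the sensitive band would then amount to adding noise to each component in turn, reconstructing the image, and measuring how much the pre-trained network's predictions change; the band producing the largest change is the one to filter.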
