Abstract

In recent years, Artificial Intelligence (AI), Deep Learning (DL), and Neural Networks (NNs) have made rapid progress and opened broad avenues for future research. Alongside these advances come threats and security vulnerabilities for neural networks and AI models: a few fabricated inputs/samples can cause a model's predictions to deviate significantly. Patch-based adversarial attacks can change the output of a neural network to a completely different result by making only small changes to its input. These attacks apply a patch to the input image in order to cause the classifier to misclassify it and make an incorrect prediction. The goal of this research is to develop effective defense strategies against these types of attacks and make the model/neural network more robust.
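The mechanism described above can be illustrated with a minimal sketch: an attacker overlays a small patch onto a region of the input image before it is fed to the classifier. The function name `apply_patch` and the array shapes below are illustrative assumptions, not part of the original work.

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Paste an adversarial patch onto a copy of the input image.

    image: H x W x C float array with values in [0, 1]
    patch: h x w x C float array with values in [0, 1]
    (top, left): upper-left corner where the patch is placed
    """
    h, w = patch.shape[:2]
    patched = image.copy()
    # Overwrite the targeted region with the patch pixels
    patched[top:top + h, left:left + w] = patch
    return patched

# Example: a 32x32 RGB image with an 8x8 patch in the top-left corner
image = np.zeros((32, 32, 3))
patch = np.ones((8, 8, 3))  # a maximally bright patch, for illustration
patched = apply_patch(image, patch, top=0, left=0)
```

In a real attack, the patch contents would be optimized (e.g., by gradient ascent on the classifier's loss) so that this small region dominates the prediction regardless of the surrounding image; defenses aim to detect or neutralize such regions.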

