Abstract
Adversarial attacks pose a serious and evolving threat to the operation of deep neural networks. Recently, adversarial algorithms have been developed that make it easy for ordinary attackers to fool deep neural networks. State-of-the-art algorithms can generate printable adversarial patches offline; such patches can then be placed inconspicuously within the field of view of a capturing camera. In this paper, we propose an algorithm that disrupts the operation of these adversarial patches. The proposed algorithm uses the intrinsic information content of the input image to extract a set of ally patches. The extracted patches break the salience of the attacking adversarial patch to the network. To our knowledge, this is the first work to defend against this kind of adversarial attack by counter-processing the input image so as to neutralize the effect of any adversarial patches it may contain. The classification decision is taken by a late-fusion strategy applied to the independent classifications produced by the extracted patch alliance. Evaluation experiments were conducted on the 1000 classes of the ILSVRC benchmark, using different convolutional neural network models and adversarial patches of varying scales. The results show that the proposed ally patches effectively reduce the success rates of adversarial patches.
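As a rough illustration of the defense described above, the following Python sketch crops a fixed grid of sub-regions from the input image, classifies each one independently, and fuses the per-patch predictions by majority vote. The grid-based extraction and the plain voting rule are simplifying assumptions made here for illustration; the paper selects ally patches from the intrinsic information content of the image, and the function and parameter names below are hypothetical.

    # Minimal sketch of the ally-patch idea: crop several sub-regions
    # ("ally patches"), classify each independently, and fuse the
    # per-patch decisions by majority vote. The grid crop and the
    # voting rule are illustrative assumptions, not the authors'
    # exact selection strategy.
    import numpy as np
    from collections import Counter

    def extract_ally_patches(image: np.ndarray, grid: int = 3) -> list:
        """Split an HxWxC image into a grid x grid set of candidate patches."""
        h, w = image.shape[:2]
        ph, pw = h // grid, w // grid
        return [image[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
                for r in range(grid) for c in range(grid)]

    def classify_with_allies(image: np.ndarray, classify) -> int:
        """Late fusion: each ally patch votes with its own class prediction.

        `classify` is any callable mapping a patch to a class id, e.g. a
        wrapper around a CNN. An adversarial patch can dominate only the
        few allies that actually contain it, so its salience is diluted
        in the fused vote.
        """
        votes = [classify(patch) for patch in extract_ally_patches(image)]
        return Counter(votes).most_common(1)[0][0]

Here, classify could wrap any ImageNet convolutional network; because an adversarial patch occupies only a small part of the image, it can sway only a minority of the allies, which is the intuition behind the late-fusion decision.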
Highlights
Vision-based intelligent systems and applications have been increasing rapidly
For evaluation, we measured the effect of ally patches on the primary performance metric of adversarial patches, namely the attack success rate (see the sketch after these highlights)
We presented ally patches as a defense against adversarial-patch attacks on deep neural networks
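The success rate referred to in the highlights is commonly measured as the fraction of patched images that the network assigns to the attacker's intended target class. A minimal sketch of that computation, with hypothetical names, is:

    # Illustrative computation of an adversarial patch's success rate:
    # the fraction of patched images classified as the attacker's
    # chosen target class. A defense lowers this number.
    def attack_success_rate(predictions, target_class):
        """predictions: class ids produced by the model on patched images."""
        if not predictions:
            return 0.0
        hits = sum(1 for p in predictions if p == target_class)
        return hits / len(predictions)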
Summary
Vision-based intelligent systems and applications have been increasing rapidly. The growing dependence on automated systems is a double-edged sword. Intelligent systems make daily human life easier and more comfortable. On the negative side, however, these systems are vulnerable to manipulation by attackers, whether humans or software agents. The consequences of a successful attack vary in criticality with the nature of the underlying application: they may range from mere inconvenience in applications such as entertainment image and video annotation, through security-critical problems such as false person identification, to life-threatening failures in autonomous navigation and driver-assistance systems.