Abstract

The success of deep learning is largely attributed to its representation capability, especially in computer vision tasks. However, recent studies have shown that deep neural networks (DNNs) are often vulnerable to adversarial attacks. To identify the common ground of various attacks, we compare clean and adversarial examples via a hidden-feature visualization method, i.e., heatmaps, since adversarial perturbations are usually imperceptible to the human visual system. We observe that adversarial examples generated by various attack methods fool DNNs by scattering the critical areas of the image and blurring object contours. Inspired by these findings, we propose a simple but effective defense that works by Refocusing on Critical Areas and Strengthening Object Contours (RCA-SOC for short). It is a pixel-attention-weight-based defense composed of a pixel channel attention and a pixel plane attention: critical areas of the image are reconstructed by the pixel channel attention, while object contours are strengthened by the pixel plane attention. The effectiveness of RCA-SOC against different attacks is demonstrated on models and datasets of varying scale. Furthermore, current state-of-the-art defense methods are shown to improve when cascaded with RCA-SOC. To demonstrate its practical value, RCA-SOC is also shown to be effective in a case study on not-safe-for-work (NSFW) recognition.
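
The abstract describes RCA-SOC only at a high level. The sketch below illustrates what a pixel channel attention followed by a pixel plane (spatial) attention could look like in PyTorch, assuming a CBAM-style design; all module names, layer choices, and hyperparameters (the reduction ratio, the 7x7 convolution) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class PixelChannelAttention(nn.Module):
    """Learns per-channel weights to refocus on critical areas (illustrative)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.mlp(self.avg_pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # channel-wise re-weighting


class PixelPlaneAttention(nn.Module):
    """Learns a spatial (H x W) weight map that can emphasize object contours."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)   # per-pixel mean over channels
        mx, _ = x.max(dim=1, keepdim=True)  # per-pixel max over channels
        w = self.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w  # pixel-wise re-weighting


class RCASOC(nn.Module):
    """Hypothetical composition: channel attention, then plane attention."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.channel_attention = PixelChannelAttention(channels, reduction=1)
        self.plane_attention = PixelPlaneAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.plane_attention(self.channel_attention(x))


if __name__ == "__main__":
    defense = RCASOC(channels=3)
    x = torch.rand(1, 3, 224, 224)  # a (possibly adversarial) input image
    y = defense(x)                  # re-weighted image, fed to the classifier
    print(y.shape)                  # torch.Size([1, 3, 224, 224])
```

Under this reading, cascading RCA-SOC with an existing defense, as the abstract describes, would amount to applying the module to the input before the other method's pipeline; the channel-then-plane ordering follows the CBAM convention and may differ from the paper's actual design.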
