Abstract

The emergence of adversarial examples has drawn widespread attention to the safety of deep learning. Most recent research focuses on crafting adversarial examples that cause networks to make wrong predictions, and rarely examines the resulting changes in the feature embedding space from the perspective of interpretability. Moreover, existing attack algorithms are typically designed for a single task, and few general methods can handle multiple tasks at once, such as image classification, object detection, and face recognition. To address these issues, we propose a new attack algorithm, CAMA, for deep neural networks (DNNs). CAMA perturbs each feature extraction layer through an adaptive feature measurement function, thereby disrupting the predicted class activation mapping of the DNN. Experiments show that CAMA excels at creating white-box adversarial examples on classification networks and achieves the highest attack success rate. To counter the loss of adversarial effectiveness caused by image transformations, we further propose spread-spectrum compression CAMA, which achieves a better attack success rate under various defensive measures. In addition, CAMA successfully attacks face recognition and object detection networks with excellent performance, verifying that it is a general attack algorithm applicable to different tasks.
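
The abstract does not give the exact form of the adaptive feature measurement function, so the following is only a minimal illustrative sketch of the general idea of a feature-level, layer-wise attack: a PGD-style loop that pushes the adversarial example's intermediate feature maps away from the clean ones. The layer selection, the surrogate per-layer loss (here a simple MSE gap), and all hyperparameters (eps, alpha, steps) are assumptions for illustration, not the paper's actual CAMA loss.

```python
import torch
import torch.nn.functional as F

def collect_features(model, x, layer_names):
    """Run a forward pass and capture intermediate feature maps via hooks."""
    feats, hooks = {}, []
    for name, module in model.named_modules():
        if name in layer_names:
            hooks.append(module.register_forward_hook(
                lambda m, i, o, n=name: feats.update({n: o})))
    model(x)
    for h in hooks:
        h.remove()
    return feats

def feature_level_attack(model, x, layer_names, eps=8/255, alpha=2/255, steps=10):
    """PGD-style attack that enlarges the gap between clean and adversarial features."""
    clean_feats = {k: v.detach()
                   for k, v in collect_features(model, x, layer_names).items()}
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        adv_feats = collect_features(model, x_adv, layer_names)
        # Hypothetical surrogate for the adaptive feature measurement function:
        # sum of per-layer L2 gaps between adversarial and clean feature maps.
        loss = sum(F.mse_loss(adv_feats[k], clean_feats[k]) for k in layer_names)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                # ascend on the feature gap
            x_adv = x + (x_adv - x).clamp(-eps, eps)           # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)                          # keep a valid image
    return x_adv.detach()
```

In a real setting, `model` would be the target classifier (or detector / face recognizer) in evaluation mode and `layer_names` the feature extraction layers to perturb; the paper's method additionally ties the perturbation to the predicted class activation mapping, which this sketch does not reproduce.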
