Abstract

In recent years, deep neural networks (DNNs) have made significant progress on face recognition. However, DNNs have been found to be vulnerable to adversarial examples, which can lead to severe consequences in real-world applications. This paper focuses on improving the transferability of adversarial examples against face recognition models. We propose Gradient Eroding (GE), which makes the gradients of residual blocks more diverse by dynamically eroding the back-propagation. Based on GE, we further propose a novel black-box adversarial attack named Corrasion Attack. Extensive experiments demonstrate that our approach effectively improves the transferability of adversarial attacks against face recognition models, outperforming state-of-the-art black-box attacks by 29.35% in fooling rate. With adversarial training on the adversarial examples we generate, model robustness can be improved by up to 43.2%. Moreover, Corrasion Attack successfully breaks two online face recognition systems, achieving a fooling rate of up to 89.8%.
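The abstract does not spell out the exact erosion rule, but the core idea of dynamically eroding back-propagation through residual blocks can be illustrated with a minimal PyTorch sketch: a layer that is the identity in the forward pass and randomly zeroes a fraction of the gradient in the backward pass. The names `GradientErode`, `ErodedResidualBlock`, and the `erode_prob` parameter are illustrative assumptions, not the paper's API.

```python
import torch
import torch.nn as nn

class GradientErode(torch.autograd.Function):
    """Identity in the forward pass; in the backward pass, randomly zeroes
    a fraction of the gradient. (Sketch of the idea only; the paper's
    exact erosion rule may differ.)"""

    @staticmethod
    def forward(ctx, x, erode_prob):
        ctx.erode_prob = erode_prob
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Sample a fresh Bernoulli mask on every backward pass, dynamically
        # "eroding" part of the gradient flowing through this point.
        mask = (torch.rand_like(grad_output) >= ctx.erode_prob).to(grad_output.dtype)
        return grad_output * mask, None  # no gradient w.r.t. erode_prob

class ErodedResidualBlock(nn.Module):
    """A toy residual block whose residual-branch gradient is eroded."""

    def __init__(self, channels, erode_prob=0.3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.erode_prob = erode_prob

    def forward(self, x):
        out = self.conv(x)
        # Erode only the residual branch; the skip connection stays intact.
        out = GradientErode.apply(out, self.erode_prob)
        return x + out
```

Under these assumptions, each iteration of a gradient-based attack re-samples the erosion mask, so the accumulated perturbation direction is averaged over many randomly eroded back-propagation paths through the surrogate model, which matches the gradient-diversity effect the abstract describes.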
