Abstract

Adversarial example attacks have become a growing threat to neural network-based face recognition systems. Crafted by composing facial images with pixel-level perturbations, adversarial examples alter key features of the input and thereby cause neural networks to misclassify it. However, perturbations are often attenuated or distorted in complex physical environments, which can prevent existing attack methods from taking effect. In this paper, we focus on designing new attacks that are both effective and inconspicuous in the physical world. Motivated by the differences in image-forming principles between cameras and human eyes, we propose VLA, a novel attack against black-box face recognition systems that uses visible light. In VLA, visible-light-based adversarial perturbations are crafted and projected onto human faces, allowing an adversary to conduct targeted or untargeted attacks. VLA decomposes each adversarial perturbation into a perturbation frame and a concealing frame: the former modifies the captured facial image, while the latter renders these modifications inconspicuous to human eyes. We conduct extensive experiments to demonstrate the effectiveness, inconspicuousness, and robustness of the adversarial examples crafted by VLA in physical scenarios.
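
To illustrate one way the frame decomposition could work, the following minimal Python sketch splits a desired perturbation into a perturbation frame and a concealing frame whose temporal average is uniform illumination. The underlying assumption, which goes beyond what the abstract states, is that the two frames alternate faster than human vision can resolve, so an observer integrates them into flat light while a short camera exposure captures only the perturbation frame. All names, the base illumination level, and the flat-average construction are illustrative, not taken from the paper.

import numpy as np

def decompose(perturbation: np.ndarray, base_level: float = 0.5):
    """Return (perturbation_frame, concealing_frame) for a perturbation in [-0.5, 0.5].

    Illustrative sketch only: the concealing frame compensates the perturbation
    frame so the mean of the two frames equals the constant base illumination.
    """
    perturbation_frame = np.clip(base_level + perturbation, 0.0, 1.0)
    concealing_frame = np.clip(2.0 * base_level - perturbation_frame, 0.0, 1.0)
    return perturbation_frame, concealing_frame

# Usage: a hypothetical random perturbation over a 64x64 projected area.
rng = np.random.default_rng(0)
delta = rng.uniform(-0.3, 0.3, size=(64, 64))
p_frame, c_frame = decompose(delta)

camera_view = p_frame                    # short exposure: captures one frame only
human_view = 0.5 * (p_frame + c_frame)   # eye integrates the alternating frames
print(np.abs(human_view - 0.5).max())    # ~0: the observer sees near-uniform light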
