Abstract

Face detection is a classical problem in computer vision. It has significant practical value in face recognition and related applications such as face-scan payment and identity authentication. The emergence of adversarial attacks on face detection poses a substantial threat to the security of face recognition. Current adversarial attacks on face detection require full knowledge of the attacked model's structure and parameters; as a result, their transferability, i.e., the effectiveness of the attack across other models, is low. Moreover, out of commercial confidentiality, commercial face detection models deployed in real-world applications cannot be accessed, so white-box adversarial attacks cannot be launched against them directly. To address these problems, we propose a black-box physical attack method on face detection. Through ensemble learning, we extract the common weakness shared by multiple face detection models. An attack against this common weakness transfers well across models and makes it possible to evade black-box face detection models. Our method successfully evades both white-box and black-box face detection models on both PC and mobile terminals, including camera modules, mobile payment modules, selfie beauty modules, and official face detection models.
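The ensemble idea described above can be sketched with a toy example: instead of attacking one surrogate detector, the perturbation is driven by the average confidence of several surrogates, so it targets what the models have in common rather than the quirks of any single one. The sketch below is illustrative only and is not the paper's implementation; the toy `detector_a`/`detector_b` scoring functions, the finite-difference gradient, and the single FGSM-style step are all assumptions introduced for this example.

```python
import numpy as np

def detector_a(x):
    # Toy surrogate "detection confidence": higher means a face is detected.
    return float(np.sum(x ** 2))

def detector_b(x):
    # A second toy surrogate with a different decision surface.
    return float(np.sum(np.abs(x)))

def ensemble_score(x, detectors):
    # The "common weakness" proxy: average confidence over all surrogates.
    return sum(d(x) for d in detectors) / len(detectors)

def numeric_grad(f, x, eps=1e-5):
    # Central finite-difference gradient (stand-in for backpropagation).
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = eps
        g.flat[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def ensemble_fgsm_step(x, detectors, alpha=0.1):
    # One FGSM-style signed-gradient step that lowers the ensemble score,
    # pushing the input toward evading every surrogate at once.
    g = numeric_grad(lambda z: ensemble_score(z, detectors), x)
    return x - alpha * np.sign(g)

detectors = [detector_a, detector_b]
x = np.array([0.5, -0.3, 0.8])
x_adv = ensemble_fgsm_step(x, detectors)
```

A perturbation built this way tends to transfer better, because a direction that simultaneously reduces every surrogate's confidence is more likely to also affect an unseen black-box model than a direction overfit to one surrogate.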
