Abstract

Face recognition has advanced rapidly with recent deep learning methods, but deep learning-based systems have been shown to be vulnerable to adversarial attacks: small changes to an input image, undetectable by the human visual system, can trick a face recognition system into misclassifying. In this work, a black-box attack generation method is proposed that uses a generative adversarial network (GAN) to create a transferable patch attack (TPA) against current face recognition systems. The TPA attack uses generator and discriminator networks to produce the adversarial images. The Labeled Faces in the Wild (LFW) dataset serves as input to the proposed TPA attack, which perturbs each image by adding noise in the form of a patch, placed so that it remains invisible to the naked eye. The attack is evaluated against the FaceNet, ArcFace, and CosFace face recognition models and compared with existing attacks. The purpose of this paper is to propose a black-box attack, in which the attacker has no information about the target model, and to increase the transferability of adversarial attacks. The quantitative findings demonstrate that the proposed attack successfully misclassifies recent FR models and achieves a higher attack success rate than existing attacks.
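
To illustrate the general idea of a GAN-driven adversarial patch, a minimal sketch follows, assuming a PyTorch environment. The class names, patch placement, toy surrogate encoder, and training loop are hypothetical illustrations, not the paper's actual architecture; the discriminator network, which in the paper helps keep the perturbation inconspicuous, is omitted here for brevity. The sketch trains a generator so that the patched face embedding moves away from the clean embedding of a surrogate model; in the black-box setting, the real target model is never queried for gradients, and the patch is expected to transfer.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchGenerator(nn.Module):
    # Maps a noise vector to a small, low-amplitude patch (hypothetical architecture).
    def __init__(self, noise_dim=100, patch_size=32):
        super().__init__()
        self.patch_size = patch_size
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 3 * patch_size * patch_size),
            nn.Tanh(),
        )

    def forward(self, z):
        patch = self.net(z).view(-1, 3, self.patch_size, self.patch_size)
        return 0.05 * patch  # scale down so the patch stays visually inconspicuous

def apply_patch(images, patch, x=40, y=40):
    # Adds the patch to a fixed region of each face image and re-clips to [0, 1].
    out = images.clone()
    ps = patch.shape[-1]
    out[:, :, y:y + ps, x:x + ps] = (out[:, :, y:y + ps, x:x + ps] + patch).clamp(0, 1)
    return out

# Toy training step against a stand-in surrogate FR encoder.
surrogate = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128))
generator = PatchGenerator()
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-3)
faces = torch.rand(8, 3, 112, 112)  # placeholder batch of LFW-sized face crops

for step in range(5):
    z = torch.randn(8, 100)
    patched = apply_patch(faces, generator(z))
    emb_clean = F.normalize(surrogate(faces), dim=1)
    emb_adv = F.normalize(surrogate(patched), dim=1)
    loss = (emb_clean * emb_adv).sum(dim=1).mean()  # drive cosine similarity down
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: mean cosine similarity {loss.item():.3f}")

In a fuller version of this sketch, a discriminator would be trained jointly to penalize patches that look unnatural, which is what keeps the perturbation imperceptible while the generator keeps lowering the similarity between clean and patched embeddings.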
