Abstract

The application of facial cosmetics may cause substantial alterations in facial appearance, which can degrade the performance of facial biometric systems. Additionally, it was recently demonstrated that makeup can be abused to launch so-called makeup presentation attacks. More precisely, an attacker might apply heavy makeup to obtain the facial appearance of a target subject with the aim of impersonation, or to conceal their own identity. We provide a comprehensive survey of works related to the topic of makeup presentation attack detection, along with a critical discussion. Subsequently, we assess the vulnerability of a commercial off-the-shelf and an open-source face recognition system against makeup presentation attacks. Specifically, we focus on makeup presentation attacks with the aim of impersonation, employing the publicly available Makeup Induced Face Spoofing (MIFS) and Disguised Faces in the Wild (DFW) databases. It is shown that makeup presentation attacks might seriously impact the security of face recognition systems. Further, we propose different image pair-based, i.e. differential, attack detection schemes which analyse differences in feature representations obtained from potential makeup presentation attacks and corresponding target face images. The proposed detection systems employ various types of feature extractors, including texture descriptors, facial landmarks, and deep (face) representations. To distinguish makeup presentation attacks from genuine, i.e. bona fide, presentations, machine learning-based classifiers are used. The classifiers are trained with a large number of synthetically generated makeup presentation attacks, utilising a generative adversarial network for facial makeup transfer in conjunction with image warping. Experimental evaluations conducted using the MIFS database and a subset of the DFW database reveal that deep face representations achieve competitive detection equal error rates of 0.7% and 1.8%, respectively.
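The differential detection scheme described above can be sketched in a few lines: embeddings are extracted from the suspected attack image and the corresponding target image, their difference vector is taken as the feature, and a classifier decides between bona fide and attack. The sketch below is purely illustrative and is not the authors' implementation; the random vectors stand in for deep face representations (which in practice would come from a face recognition network), and the noise scales are assumptions chosen only to make the two classes separable.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
DIM = 128  # stand-in embedding dimensionality (assumption)

def embedding_pair(is_attack):
    """Simulate embeddings for a (probe, target) image pair.

    Bona fide mated pairs lie close together; makeup presentation
    attacks deviate more strongly from the target embedding.
    The noise scales below are illustrative assumptions.
    """
    target = rng.normal(size=DIM)
    scale = 0.8 if is_attack else 0.2
    probe = target + rng.normal(scale=scale, size=DIM)
    return probe, target

def differential_feature(probe, target):
    # Differential feature: element-wise difference of the two embeddings
    return probe - target

# Build a small synthetic training set (0 = bona fide, 1 = attack)
X, y = [], []
for label in (0, 1):
    for _ in range(200):
        p, t = embedding_pair(bool(label))
        X.append(differential_feature(p, t))
        y.append(label)

clf = SVC().fit(X, y)

# Score an unseen pair
p, t = embedding_pair(True)
print("attack" if clf.predict([differential_feature(p, t)])[0] else "bona fide")
```

In the paper's actual systems, the classifier is trained on synthetically generated attacks (GAN-based makeup transfer plus image warping) rather than on random vectors, and several feature extractors (texture descriptors, landmarks, deep representations) are compared.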

Highlights

  • We assessed the vulnerability of a Commercial Off-The-Shelf (COTS) and an open-source face recognition system against makeup presentation attacks (M-PAs)

  • Makeup PAs (M-PAs) based on a simple makeup style transfer achieve a rather low success rate

Introduction

Biometric recognition has quickly established itself as one of the most pertinent means of authenticating individuals in a reliable and fast manner by analysing their biological and/or behavioural characteristics [1], [2]. Potential attack vectors against biometric systems were first established in [3]. Because many biometric characteristics are not secret, in particular the face, so-called presentation attacks (PAs) or “spoofing” attacks represent one of the most critical attack vectors against biometric systems [4]. Researchers showed how facial recognition systems introduced by three different laptop manufacturers could be circumvented with photos of legitimate users. This vulnerability has since been listed in the National Vulnerability Database of the National Institute of Standards and Technology (NIST) [7].
