Abstract

The accuracy of face recognition systems can be negatively affected by facial cosmetics, which have the ability to substantially alter facial appearance. Recently, it was shown that makeup can also be abused to launch so-called makeup presentation attacks, in which an attacker applies heavy makeup to achieve the facial appearance of a target subject for the purpose of impersonation. In this work, we assess the vulnerability of a widely used open-source face recognition system, i.e. ArcFace, to makeup presentation attacks using the publicly available Makeup Induced Face Spoofing (MIFS) and FRGCv2 databases. It is shown that the makeup presentation attacks contained in the MIFS database achieve only a low success rate and thus have negligible impact on the security of the face recognition system. Further, we employ image warping to simulate improved makeup presentation attacks, which reveal a significantly higher success rate. Moreover, we propose a makeup attack detection scheme which compares face depth data with face depth reconstructions obtained from the RGB images of potential makeup presentation attacks. Significant variations between the two sources of information indicate facial shape alterations induced by heavy use of makeup, i.e. potential makeup presentation attacks. Conceptual experiments on the MIFS database confirm the soundness of the presented approach.
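
The detection scheme is only described at a conceptual level in the abstract; the sketch below illustrates the underlying idea of comparing a sensor-captured depth map with a depth map reconstructed from the RGB image and flagging large discrepancies. The reconstruction function, the boolean face mask, and the threshold value are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from typing import Callable


def depth_discrepancy(captured_depth: np.ndarray,
                      reconstructed_depth: np.ndarray,
                      face_mask: np.ndarray) -> float:
    """Mean absolute difference between the two depth maps, restricted to the
    facial region. Both maps are assumed to be aligned and normalised to a
    common scale before comparison."""
    diff = np.abs(captured_depth - reconstructed_depth)
    return float(diff[face_mask].mean())


def is_potential_makeup_attack(rgb_image: np.ndarray,
                               captured_depth: np.ndarray,
                               face_mask: np.ndarray,
                               reconstruct_depth: Callable[[np.ndarray], np.ndarray],
                               threshold: float = 0.15) -> bool:
    """Flag the probe as a potential makeup presentation attack when the depth
    map captured by the sensor deviates strongly from the depth map
    reconstructed from the RGB image alone (hypothetical threshold)."""
    # `reconstruct_depth` stands in for any monocular 3D face reconstruction method.
    reconstructed = reconstruct_depth(rgb_image)
    return depth_discrepancy(captured_depth, reconstructed, face_mask) > threshold
```

In this reading, heavy makeup can change how the face appears in the RGB channel (and hence the reconstructed shape) without changing the physically measured depth, so a large discrepancy serves as evidence of a shape-altering makeup presentation attack.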
