Abstract

Face anti-spoofing is a central component of biometric authentication in automatic face recognition systems. Recently, two families of approaches have performed particularly well against presentation attacks: 1) pixel-wise supervision-based methods, which provide fine-grained pixel information to learn specific auxiliary maps; and 2) anomaly detection-based methods, which treat face anti-spoofing as an open-set training task and learn spoof detectors using only bona fide data, and which have been shown to generalize well to unknown attacks. However, these approaches depend on handcrafted prior information to control the generation of intermediate difference maps and easily fall into local optima. In this paper, we propose a novel frame-level face anti-spoofing method, Covered Style Mining-GAN (CSM-GAN), which converts face anti-spoofing detection into a style transfer process without any prior information. Specifically, CSM-GAN has four main components: the Covered Style Encoder (CSE), responsible for mining the difference map containing the photography style and discriminative clues; the Auxiliary Style Classifier (ASC), consisting of several stacked Difference Capture Blocks (DCB) and responsible for distinguishing bona fide faces from spoofing faces; and the Style Transfer Generator (STG) and Style Adversarial Discriminator (SAD), which together form a generative adversarial network that achieves the style transfer. Comprehensive experiments on several benchmark datasets show that the proposed method not only outperforms current state-of-the-art methods but also produces better visual diversity in difference maps.
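The data flow among the four components can be sketched as follows. This is a minimal illustrative sketch only: the stand-in functions below are hypothetical placeholders (simple NumPy operations), not the paper's actual network designs, and the shapes and scores are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def cse(face):
    """Covered Style Encoder (placeholder): mine a per-pixel difference
    map intended to carry photography style and discriminative clues."""
    return face - face.mean()  # toy stand-in for the learned encoder

def asc(diff_map):
    """Auxiliary Style Classifier (placeholder): pool the difference map
    and emit a bona fide probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-diff_map.mean()))

def stg(face, diff_map):
    """Style Transfer Generator (placeholder): re-render the face under
    the mined style by injecting the difference map."""
    return face + diff_map

def sad(generated):
    """Style Adversarial Discriminator (placeholder): real/fake score
    used adversarially against the generator."""
    return 1.0 / (1.0 + np.exp(-generated.std()))

# Toy single-channel "face crop" standing in for a real input frame.
face = rng.normal(size=(32, 32))

diff = cse(face)            # mined difference map
bona_fide_score = asc(diff) # classification branch
transferred = stg(face, diff)
adv_score = sad(transferred)

assert 0.0 < bona_fide_score < 1.0 and 0.0 < adv_score < 1.0
```

In an actual implementation each placeholder would be a trained network, with ASC supervising the mined map and STG/SAD trained adversarially; the sketch only fixes the wiring described in the abstract.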
