Abstract

In practical applications, the generalization capability of face anti-spoofing (FAS) models on unseen domains is of paramount importance for coping with diverse camera sensors, device drift, environmental variation, and unpredictable attack types. Recently, various domain generalization (DG) methods have been developed to improve the generalization capability of FAS models by training on multiple source domains. These DG methods commonly require collecting sufficient real-world attack samples of different attack types for each source domain. This work aims to learn an FAS model that uses no real-world attack samples from any source domain yet still generalizes well to unseen domains, which can significantly reduce the learning cost. Toward this goal, we draw inspiration from the theoretical error bound of domain generalization and use negative data augmentation in place of real-world attack samples for training. We show that with only a few types of simple synthesized negative samples, e.g., color jitter and color mask, the learned model achieves performance competitive with state-of-the-art DG methods trained on real-world attack samples. Moreover, a dynamic global common loss and a local contrast loss are proposed to encourage the model to learn a compact and common feature representation for real face samples from different source domains, which further improves the generalization capability. Extensive cross-dataset experiments demonstrate that our method can even outperform state-of-the-art DG methods that use real-world attack samples for training. The code for reproducing our results is available at https://github.com/WeihangWANG/NDA-FAS.
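
The sketch below illustrates the kind of negative data augmentation the abstract refers to: synthesizing "attack" samples from real face images via color jitter and a random color mask. It is not the authors' released code; the transform strengths, the mask construction, and the helper name `make_negative` are assumptions made purely for illustration.

```python
# Minimal sketch of negative data augmentation for FAS (assumed parameters,
# not the released implementation).
import random
import torch
from torchvision import transforms

# Strong color jitter as one form of synthesized negative sample.
color_jitter = transforms.ColorJitter(
    brightness=0.8, contrast=0.8, saturation=0.8, hue=0.4
)

def color_mask(img: torch.Tensor) -> torch.Tensor:
    """Overlay a random solid-color patch on a CxHxW image in [0, 1]."""
    _, h, w = img.shape
    mh = random.randint(h // 4, h // 2)
    mw = random.randint(w // 4, w // 2)
    top, left = random.randint(0, h - mh), random.randint(0, w - mw)
    out = img.clone()
    out[:, top:top + mh, left:left + mw] = torch.rand(3, 1, 1)  # random color
    return out

def make_negative(img: torch.Tensor) -> torch.Tensor:
    """Turn a real face tensor into a synthesized negative (spoof-like) sample."""
    return color_jitter(img) if random.random() < 0.5 else color_mask(img)
```

In training, such synthesized negatives would stand in for real-world attack samples, while the real face images keep their original labels.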
