Abstract

Although supervised face anti-spoofing (FAS) methods have shown remarkable performance, they generalize poorly to unseen attacks. Many existing methods employ domain adaptation (DA) or domain generalization (DG) techniques to reduce domain variation. However, previous works have yet to fully exploit the domain-specific style information in intermediate layers, which encodes attack-related cues such as illumination, background, and material. In this paper, we present Meta Style Selective Normalization (MetaSSN), a new framework for test-time domain-adaptive FAS. Specifically, we propose style selective normalization (SSN), which statistically estimates the domain-specific image style of each source domain. SSN adapts the network to a target image by selecting the normalization parameters that minimize the style discrepancy between the source and target domains. Furthermore, we design the training procedure as a meta-learning pipeline that simulates test-time adaptation by running the style-selection process on a virtual test domain, which boosts adaptation capability. In contrast to previous DA approaches, our framework is more practical because it requires no auxiliary networks (e.g., domain adaptors) during training. We validate our method on four public FAS datasets: CASIA-FASD, MSU-MFSD, OULU-NPU, and Idiap Replay-Attack. In most evaluations, our method outperforms conventional FAS methods by a significant margin.
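The style-selection idea can be illustrated with a minimal sketch. This is not the authors' implementation: the statistics here are plain per-feature mean and standard deviation, the distance function and the `1e-5` stabilizer are illustrative assumptions, and real SSN operates on intermediate CNN feature maps rather than raw lists.

```python
# Hedged sketch of style selective normalization (SSN), under the assumptions
# stated above: each source domain stores a style summary (mean, std), and a
# test input is normalized with the stored style closest to its own.
import math

def style_stats(features):
    """Style summary of a 1-D feature list: (mean, std)."""
    n = len(features)
    mu = sum(features) / n
    var = sum((x - mu) ** 2 for x in features) / n
    return mu, math.sqrt(var)

def select_domain(features, domain_stats):
    """Pick the source domain whose stored (mean, std) is nearest to the
    input's style, i.e. the smallest squared distance in (mean, std) space."""
    mu, sigma = style_stats(features)
    def dist(stats):
        dmu, dsigma = stats
        return (mu - dmu) ** 2 + (sigma - dsigma) ** 2
    return min(domain_stats, key=lambda name: dist(domain_stats[name]))

def normalize_with(features, domain_stats, domain):
    """Normalize the input with the selected domain's stored style,
    reducing the style gap between source and target."""
    dmu, dsigma = domain_stats[domain]
    return [(x - dmu) / (dsigma + 1e-5) for x in features]

# Toy usage with made-up per-domain statistics (the domain names only echo
# the datasets mentioned above; the numbers are invented for illustration).
domain_stats = {"OULU-NPU": (0.0, 1.0), "CASIA-FASD": (0.5, 0.2)}
target = [0.3, 0.5, 0.7]
chosen = select_domain(target, domain_stats)
normalized = normalize_with(target, domain_stats, chosen)
```

In the meta-learning pipeline described above, one held-out source domain would play the role of the virtual test domain during training, so the selection step is exercised before deployment.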
