Abstract

A primary challenge in face anti-spoofing is the sharp performance drop in cross-domain scenarios, where training and testing images are collected from different datasets. Recent methods have achieved promising results by aligning the features of all images across the available source domains. However, because of the significant distribution discrepancies among the non-face regions of these images, it is difficult to capture domain-invariant features for those regions. In this paper, we propose a novel Selective Domain-invariant Feature Alignment Network (SDFANet) for cross-domain face anti-spoofing, which seeks common feature representations by fully exploiting the differing generalization ability of different image regions. Unlike previous works that align the whole features directly, the proposed SDFANet leverages multiple domain discriminators with the same architecture to balance the generalization of different regions across all images. Specifically, we first design a multi-grained feature alignment network, composed of local-region and global-image alignment subnetworks, to learn a more generalized feature space for real faces. In addition, a domain adapter module, which alleviates large domain discrepancies with the help of a domain attention strategy, is adopted to facilitate the learning of the multi-grained feature alignment network. Furthermore, a multi-scale attention fusion module is designed in our feature generator to refine features at different levels effectively. Experimental results show that the proposed SDFANet greatly improves the generalization ability of face anti-spoofing and is superior to existing methods.
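The paper's implementation details are not reproduced on this page. As a rough illustration only, the sketch below shows one common way adversarial feature alignment with multiple same-architecture domain discriminators is realized in practice (a gradient reversal layer feeding separate discriminators for local-region and global-image features); all module names, feature sizes, and the loss combination are assumptions, not the authors' code.

```python
# Minimal PyTorch sketch (assumed, not the authors' implementation):
# adversarial alignment via a gradient reversal layer and per-region
# domain discriminators sharing the same architecture.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class DomainDiscriminator(nn.Module):
    """Small classifier predicting the source domain of a feature vector.
    Several instances of this same architecture can be attached to
    local-region and global-image features."""
    def __init__(self, feat_dim, num_domains):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, num_domains),
        )

    def forward(self, feat, lam=1.0):
        # Gradient reversal pushes the feature generator toward domain-invariant features.
        return self.net(GradReverse.apply(feat, lam))


if __name__ == "__main__":
    # Usage sketch: global and local features are aligned by separate
    # discriminators; their losses would be added to the spoof-classification loss.
    num_domains = 3
    global_disc = DomainDiscriminator(feat_dim=512, num_domains=num_domains)
    local_disc = DomainDiscriminator(feat_dim=512, num_domains=num_domains)
    global_feat = torch.randn(8, 512)   # pooled global-image features (dummy)
    local_feat = torch.randn(8, 512)    # pooled local-region features (dummy)
    domain_labels = torch.randint(0, num_domains, (8,))
    criterion = nn.CrossEntropyLoss()
    adv_loss = criterion(global_disc(global_feat), domain_labels) + \
               criterion(local_disc(local_feat), domain_labels)
    adv_loss.backward()
```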
