Abstract

Face anti-spoofing is vital for protecting the security of face recognition systems. Many existing face anti-spoofing methods rely on convolutional neural networks (CNNs) and achieve competitive performance. However, because of the representational power of CNNs, these methods also extract information irrelevant to spoof patterns, such as characteristics of the acquisition equipment and the environment, which leaves the network vulnerable to changes in illumination or camera. In this work, we propose a plug-and-play module called DyAttention that improves robustness against environmental changes. Building on DyAttention, we construct a network named DANet that captures spoof patterns from coarse to fine and dynamically captures the texture differences between live and spoof samples in the facial area. Specifically, we use a spatial attention mechanism to generate a mask of the facial area. We then extract the intrinsic texture patterns and piecewise enhance them via dynamic activation to obtain a clean representation in which the texture patterns are unaffected by environmental and domain factors. In experiments on three benchmark datasets, DANet achieves state-of-the-art intra-dataset accuracy on CASIA-MFSD, Replay-Attack, and OULU-NPU. It also improves cross-dataset performance between CASIA-MFSD and Replay-Attack, reducing the average HTER by 1.3%.
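To make the described pipeline concrete, below is a minimal PyTorch sketch of what a DyAttention-style module could look like. This is not the authors' implementation: the 7x7 mask convolution, the number of piecewise branches k, and the max-over-linear-branches form of the dynamic activation (in the spirit of DY-ReLU) are all assumptions. The abstract only specifies that a spatial attention mask gates the facial-area features and that the gated textures are piecewise enhanced by a dynamic activation.

```python
import torch
import torch.nn as nn


class DyAttention(nn.Module):
    """Hypothetical sketch of a plug-and-play DyAttention-style module.

    Pipeline per the abstract: (1) a spatial attention mask over the
    facial area gates the input features; (2) the gated texture features
    are piecewise enhanced by a dynamic activation. All layer sizes and
    the exact activation form are assumptions, not the paper's design.
    """

    def __init__(self, channels: int, k: int = 2):
        super().__init__()
        # Spatial attention: predict a per-pixel mask of the facial area.
        self.mask = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        # Dynamic activation: predict k linear slopes per channel from
        # globally pooled features, so the activation adapts to the input.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.coef = nn.Sequential(
            nn.Conv2d(channels, channels * k, kernel_size=1),
            nn.Tanh(),
        )
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        m = self.mask(x)                      # (B, 1, H, W) facial-area mask
        t = x * m                             # gated texture features
        a = self.coef(self.pool(t))           # (B, C*k, 1, 1) dynamic slopes
        a = a.view(x.size(0), self.k, x.size(1), 1, 1)
        # Piecewise enhancement: maximum over k linear branches per channel.
        y = torch.max(a * t.unsqueeze(1), dim=1).values
        return y


# Usage sketch: insert after any convolutional stage of a backbone.
# feats = DyAttention(channels=64)(torch.randn(2, 64, 32, 32))
```

Because the module only consumes and emits a feature map of the same shape, it can be dropped between stages of an existing CNN backbone, which is consistent with the "plug-and-play" claim in the abstract.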
