Abstract
Face anti-spoofing, as a security measure for face verification and recognition systems, distinguishes between genuine and fake faces. Although CNNs have achieved impressive results in intra-dataset tests (i.e., the model is trained and tested on the same dataset), most of these models fail to generalize to unseen attacks (e.g., when the model is trained on one dataset and then evaluated on another). This is a major, and mostly overlooked, concern in face anti-spoofing research. Our experiments on two challenging benchmark face spoofing datasets, CASIA and Replay-Attack, demonstrate the poor adaptation ability of CNNs from one dataset to another. By visualizing the implicit attention of the CNN, we find that scene-dependent features extracted by the CNN impair the model's generalization capability. To address this problem, we propose a novel solution based on a scene-independent feature representation.