Abstract
To secure face verification systems, prior face anti-spoofing studies mine hidden cues in raw images to discriminate live faces from diverse attacks, often with the help of auxiliary supervision. However, their training processes suffer from several inherent shortcomings: 1) the multi-scale nature of spoof cues is neglected; 2) each training image retains its complete facial structure; and 3) implicit subdomains within the whole dataset are ignored. As a result, these methods struggle to mine comprehensive spoof patterns and tend to memorize the entire training set, incurring overfitting. In this paper, we propose a new framework named Destruction and Combination Network (DCN), comprising a Multi-scale Representation Extraction Module, a Structure Destruction Module, and a Content Combination Module, to address these limitations respectively. The first module exploits multi-scale representations to learn spoof cues comprehensively; the second destroys images into patches to construct non-structural inputs; and the third recombines patches from different subdomains or classes into a mixup-style construction. Building on this splitting-and-splicing operation, we further introduce a Local Relation Modeling Module to model the second-order relationships between patches. To demonstrate the generalization capability of the proposed framework, we evaluate it not only in intra-dataset testing but also in cross-domain, cross-content, and cross-attack scenarios. Extensive experiments across these scenarios demonstrate the reliability of our method against state-of-the-art competitors.
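The abstract does not specify implementation details, but the splitting-and-splicing idea behind the Structure Destruction and Content Combination Modules can be illustrated with a minimal sketch. The snippet below is not the authors' code: the patch size, the random per-patch mixing rule, and the patch-level label map are assumptions made purely for illustration.

```python
# A minimal sketch (not the authors' implementation) of the splitting-and-splicing
# idea: destroy two face images into non-overlapping patches and recombine patches
# drawn from different classes or subdomains. Patch size, the random-mask mixing
# rule, and the patch-level label map are illustrative assumptions.
import torch

def destroy_and_combine(img_a, img_b, patch=32, mix_ratio=0.5):
    """Split two (C, H, W) images into patch grids and splice them together.

    Returns the recombined image and a (H/patch, W/patch) map marking which
    patches came from img_b, which could serve as patch-level supervision.
    """
    c, h, w = img_a.shape
    gh, gw = h // patch, w // patch
    # Randomly decide, per patch, whether to take it from img_a or img_b.
    mask = (torch.rand(gh, gw) < mix_ratio).float()
    # Expand the patch-level mask to pixel resolution.
    pix_mask = mask.repeat_interleave(patch, 0).repeat_interleave(patch, 1)
    mixed = img_a * (1.0 - pix_mask) + img_b * pix_mask
    return mixed, mask

# Usage: splice together a "live" and a "spoof" face crop of the same resolution.
live = torch.rand(3, 256, 256)
spoof = torch.rand(3, 256, 256)
mixed_img, patch_labels = destroy_and_combine(live, spoof)
print(mixed_img.shape, patch_labels.shape)  # torch.Size([3, 256, 256]) torch.Size([8, 8])
```

Under these assumptions, the resulting non-structural input breaks the integral face layout, and the patch-level label map is one plausible way such a recombination could be supervised.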