Abstract
Face Anti-Spoofing (FAS) methods play an important role in ensuring the security of face recognition systems. Existing FAS methods perform well in short-distance scenarios, e.g., phone unlocking and face payment. However, improving the generalization of FAS in long-distance scenarios (e.g., surveillance) remains challenging due to varying image quality. To address the lack of low-quality images from real scenarios, we build a Low-Quality Face Anti-Spoofing Dataset (LQFA-D) using Hikvision's surveillance cameras. To deploy the model on an edge device with limited computation, we propose a lightweight FAS network based on MobileFaceNet, into which a Coordinate Attention (CA) module is introduced to capture important spatial information. We then propose a multi-scale FAS framework for low-quality images that explores multi-scale features and includes three multi-scale models. Experimental results on LQFA-D show that the proposed method achieves an Average Classification Error Rate (ACER) of 1.39% and a detection time of 45 ms per image on low-quality images, demonstrating the effectiveness of the proposed method.
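The abstract mentions adding a Coordinate Attention (CA) module to capture spatial information. As a rough illustration of the idea behind CA, the sketch below shows its core mechanism in NumPy: the feature map is pooled separately along the height and width directions, and the resulting direction-aware descriptors are turned into attention weights that reweight the input. This is a simplified sketch, not the paper's implementation: the original CA design additionally concatenates the pooled features and passes them through a shared 1x1 convolution with channel reduction before splitting, which is omitted here, and the weight matrices `w_h` and `w_w` are hypothetical stand-ins for learned parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x, w_h, w_w):
    """Simplified Coordinate Attention sketch.

    x:   feature map of shape (C, H, W)
    w_h: (C, C) channel-mixing weights for the height branch (stand-in for a 1x1 conv)
    w_w: (C, C) channel-mixing weights for the width branch
    """
    # Direction-aware pooling: average over width, then over height,
    # so each branch preserves position along one spatial axis.
    pool_h = x.mean(axis=2)            # (C, H) — encodes vertical position
    pool_w = x.mean(axis=1)            # (C, W) — encodes horizontal position

    # Per-direction attention weights in (0, 1).
    a_h = sigmoid(w_h @ pool_h)        # (C, H)
    a_w = sigmoid(w_w @ pool_w)        # (C, W)

    # Reweight the input along both spatial directions.
    return x * a_h[:, :, None] * a_w[:, None, :]
```

Because the two attention maps factor over the height and width axes, the module can highlight spatially important rows and columns at negligible extra cost, which is why CA fits a lightweight backbone such as MobileFaceNet.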