Abstract

As biometric authentication systems are widely used in mobile devices such as smartphones and tablets, face anti-spoofing methods have been actively developed for high-level security. However, most previous approaches still suffer from diverse types of spoofing attacks, which are hardly covered by the limited number of training datasets, and thus often show poor accuracy when unseen samples are given at test time. To address this problem, a novel face anti-spoofing method is proposed based on one-class (i.e., live faces only) learning with a live correlation loss. Specifically, encoder-decoder networks are first trained only on live faces to extract latent features that compactly represent various live facial properties in the embedding space and to produce spoofing cues, which are simply obtained by subtracting the generated image from the original RGB image. These features are then fed into the proposed feature correlation network (FCN), whose weights learn to compute the "liveness" of given features under the guidance of the live correlation loss. Notably, the proposed method requires only live facial images for training, which are easier to obtain than fake ones, so improved generalization on the face anti-spoofing problem can be expected. Experimental results on various benchmark datasets demonstrate the efficiency and robustness of the proposed method.
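The core idea of the spoofing cue described above can be sketched in a few lines: an autoencoder trained only on live faces should reconstruct live inputs well, so the residual between an input and its reconstruction serves as the cue. This is a minimal illustration, not the paper's implementation; the `reconstruct` callable is a hypothetical stand-in for the trained encoder-decoder.

```python
import numpy as np

def spoof_cue(image: np.ndarray, reconstruct) -> np.ndarray:
    """Spoofing cue: absolute residual between an input RGB image and
    its reconstruction by a live-face-only autoencoder.
    `reconstruct` is a stand-in for the trained encoder-decoder."""
    recon = reconstruct(image)
    return np.abs(image.astype(np.float32) - recon.astype(np.float32))

# Toy stand-in: an identity "reconstruction" yields an all-zero cue,
# mimicking the near-zero residual expected for a well-modelled live face.
identity = lambda x: x
live = np.random.rand(32, 32, 3).astype(np.float32)
cue_live = spoof_cue(live, identity)
```

For a real spoof input, the reconstruction would deviate from the input, producing a large-magnitude cue map that the downstream network can score.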

Highlights

  • As various mobile devices become more widespread, authentication systems based on biometric information, e.g., fingerprint, iris, and face, have recently drawn considerable attention

  • To discriminate live faces from fake ones accurately, most early methods focused on finding appropriate features using various image descriptors such as local binary patterns (LBP) [1], scale-invariant feature transform (SIFT) [2], histograms of oriented gradients (HOG) [3], and difference of Gaussians (DoG) [4], [5]

  • Although the proposed method scores lower than several methods (i.e., MADDoG [28], motion-blur-based [29], and LGSC [30]), it achieved competitive performance in experiments on the CASIA and Replay-Attack datasets, as shown in Tables 3, 4, 5, and 6, and even performed better for face anti-spoofing in some cases, e.g., video in Tables 4 and 6, cut photo in Table 5, and printed photo in Table 6


Summary

Introduction

As various mobile devices become more widespread, authentication systems based on biometric information, e.g., fingerprint, iris, and face, have recently drawn considerable attention. To discriminate live faces from fake ones accurately, most early methods focused on finding appropriate features using various image descriptors such as local binary patterns (LBP) [1], scale-invariant feature transform (SIFT) [2], histograms of oriented gradients (HOG) [3], and difference of Gaussians (DoG) [4], [5]. Even though those descriptors perform satisfactorily at capturing subtle textural differences between live and fake faces, they often fail to produce consistent results under illumination variations.


