Abstract

Most existing face authentication systems have limitations when facing the challenge raised by presentation attacks, which can enable dangerous activities when face recognition is used for smart-device unlocking, access control, and face-scan payment. Accordingly, as a security guarantee to prevent face authentication from being attacked, face presentation attack detection has been studied in this community. In this work, a face presentation attack detector is designed based on residual color texture representation (RCTR). Since existing methods lack effective data preprocessing, we propose to adopt a DW-filter to obtain a residual image, which effectively improves detection efficiency. Subsequently, a powerful co-occurrence matrix (CM) texture descriptor is introduced, which performs better than widely used descriptors such as LBP or LPQ. Additionally, representative texture features are extracted not only from RGB space but also from more discriminative color spaces such as HSV, YCbCr, and CIE 1976 L∗a∗b (LAB). The RCTR is then fed into a well-designed classifier. Specifically, we compare and analyze the performance of advanced classifiers, among which an ensemble classifier based on a probabilistic voting decision is our optimal choice. Extensive experimental results empirically verify the proposed face presentation attack detector’s superior performance in both intradataset and interdataset (mismatched training-testing samples) evaluation.
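The probabilistic voting decision mentioned above can be illustrated with a minimal sketch: each base classifier outputs per-class probabilities, the ensemble averages them, and the class with the highest mean probability wins. The specific base classifiers and probability values below are hypothetical, not taken from the paper.

```python
import numpy as np

def probabilistic_vote(prob_maps):
    """Average per-class probabilities from several classifiers
    and pick the class with the highest mean probability."""
    avg = np.mean(np.stack(prob_maps, axis=0), axis=0)  # (n_samples, n_classes)
    return avg.argmax(axis=1), avg

# Hypothetical per-classifier probabilities for 2 samples;
# classes: 0 = bona fide, 1 = presentation attack.
p_clf1 = np.array([[0.9, 0.1], [0.4, 0.6]])
p_clf2 = np.array([[0.7, 0.3], [0.2, 0.8]])
p_clf3 = np.array([[0.8, 0.2], [0.3, 0.7]])

labels, avg = probabilistic_vote([p_clf1, p_clf2, p_clf3])
# labels → [0, 1]: first sample accepted as bona fide, second flagged as a PA
```

Soft voting of this kind lets a confident minority classifier outweigh an uncertain majority, which hard (label-count) voting cannot do.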

Highlights

  • Face authentication technology is widely deployed in real life

  • A reasonable assumption is that nuisance noise present in face images, including both bona fide and presentation attack (PA) samples, may more or less impact the effectiveness of presentation attack detection, while features extracted from the residual face image are more discriminative than those of the original face image. Therefore, we propose to apply a DW-filter for residual image extraction
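The paper's DW-filter is not specified in this excerpt; as a hedged illustration of residual extraction in general, the sketch below computes a first-order horizontal difference residual, which suppresses smooth image content and keeps high-frequency texture. The function name and example values are illustrative only.

```python
import numpy as np

def horizontal_residual(img):
    """First-order horizontal difference residual:
    r[i, j] = img[i, j+1] - img[i, j].
    Cast to a signed type so negative differences survive."""
    img = img.astype(np.int16)
    return img[:, 1:] - img[:, :-1]

# Tiny hypothetical grayscale patch
face = np.array([[10, 12, 15],
                 [10, 10, 10]], dtype=np.uint8)
res = horizontal_residual(face)
# res → [[2, 3], [0, 0]]: flat regions vanish, edges remain
```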

  • All face images are normalized to 64 × 64 pixels after face alignment; the facial landmarks are localized using Dlib 19.14.0 [51]. The parameter settings of the descriptors are as follows: when extracting the co-occurrence matrix (CM) feature, two first-order differential operators are applied, the truncation threshold is set as c = 2, and the order as d = 3
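Under the stated settings (c = 2, d = 3), a co-occurrence feature of this family is typically built by truncating residual values to [-c, c] and counting d-tuples of adjacent truncated values. The sketch below follows that generic recipe along the horizontal direction; it is an assumption-based illustration, not the paper's exact implementation.

```python
import numpy as np
from itertools import product

def cm_feature(residual, c=2, d=3):
    """Truncate residual values to [-c, c], then count co-occurrences of
    d horizontally adjacent truncated values (an order-d co-occurrence).
    Returns a normalized histogram of dimension (2c + 1) ** d."""
    t = np.clip(residual, -c, c)
    counts = {p: 0 for p in product(range(-c, c + 1), repeat=d)}
    for row in t:
        for j in range(len(row) - d + 1):
            counts[tuple(int(v) for v in row[j:j + d])] += 1
    total = sum(counts.values())
    return np.array([counts[p] / total for p in sorted(counts)])

# Hypothetical residual row; values beyond |c| are clipped to ±2
res = np.array([[0, 1, 3, -4, 0]])
f = cm_feature(res, c=2, d=3)
# f is a (2*2 + 1)**3 = 125-dimensional normalized histogram
```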



Introduction

Most existing face authentication systems are vulnerable to presentation attacks (PAs). Generally speaking, compared with bona fide faces, PA samples are generated by presenting spoofing artifacts to the face authentication system. Since deep learning (DL) has shown outstanding potential in image classification tasks, numerous DL-based methods have been proposed that utilize deep networks to extract deep features from images, such as [1,2,3,4,5,6]. DL-based methods can achieve excellent performance given enough training data, but in the face presentation attack detection task the diversity and amount of training data are often insufficient, and overfitting is a vexing problem. To enable a presentation attack detection system to be applicable to various environments, domain adaptation [7]
