Abstract

Face recognition (FR) is one of the most widely deployed biometric solutions today and is used across a range of devices for various security purposes. The performance of FR systems has improved by orders of magnitude over the past decade, driven mainly by recent advances in computer vision and deep convolutional neural networks and by the availability of large training datasets. At the same time, these systems have been subject to various types of attacks. Presentation attacks are common, simple, and easy to mount: an attacker simply presents a video, photo, or mask to the camera or digital sensor, and such attacks have proven capable of fooling FR systems and granting access to unauthorised users. Presentation attack detection (PAD) is therefore attracting increasing attention in the research community, and a wide range of methods has already been developed to address this challenge. Deep learning-based methods in particular have shown very promising results. However, the existing literature suggests that even with state-of-the-art methods, performance drops significantly in cross-dataset evaluation. We present a thorough, comprehensive, and technical review of the existing literature on this timely and challenging problem. We first introduce and discuss the presentation attack problem and cover related and recent work in this area. In-depth technical details of existing presentation attack detection methods are then presented, critically discussed, and evaluated, followed by a comprehensive discussion and evaluation of existing public datasets and commonly used evaluation metrics. Our review shows clearly that, despite the recent and significant advances in this area of research, detecting unseen attacks remains a key open problem: machine learning methods tend to perform well only when the test data come from the same distribution as the training data (i.e. the same dataset). New research directions are discussed in detail, including ways to improve the generalisation of machine learning methods and move towards more stable presentation attack detection techniques that generalise across a wide range of unseen samples.
