Abstract

Background: Artificial intelligence has made significant contributions to facial recognition and biometric identification and is now employed in a wide range of applications. Detecting facial spoofing, in which someone attempts to pass as an authorized user to gain access to a system, remains difficult. Face recognition systems that resist spoofing attacks demand efficient and effective solutions. Making the recognition system more stringent increases false positives (FP) and false negatives (FN), which makes such a system questionable for practical use. Eventually, CNN-based architectures overtook the earlier prominent deep-learning techniques for this task. Objective: To analyse classifiers and identify their impact on spoof detection. The intent is not only to achieve the highest accuracy but also to find strategies that significantly reduce false positives and false negatives. Methods: In this paper, face image spoofing detection is implemented by extracting face embeddings using the Local Binary Pattern (LBP) and the VGG16 CNN architecture. SVM, KNN, Decision Tree, and ensembles of classifier models are used to classify real and spoof images. Results: The three proposed models obtained test accuracies of 98%, 94.48%, and 99% on the custom dataset and 97%, 99%, and 100% on the NUAA Photograph Imposter dataset, while keeping FN and FP significantly low. Conclusion: Human face images can be captured through smart gadgets from various sources, which makes spoof attacks possible. Although spoof detection methods exist, effective methods with high accuracy and low FN and FP are still required. The proposed ensemble techniques significantly outperform existing classifiers, achieving high accuracy while keeping FN and FP low.
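
To illustrate the pipeline described in the Methods section, the following is a minimal sketch, not the authors' implementation: it assumes scikit-image for the LBP features, Keras for the VGG16 embedding, and scikit-learn for the SVM, KNN, Decision Tree, and voting ensemble. All function and variable names (lbp_histogram, vgg_embedding, face_features, X_train, y_train) are illustrative and do not come from the paper.

```python
# Illustrative sketch: LBP + VGG16 face embeddings feeding SVM / KNN / Decision Tree
# classifiers combined in a voting ensemble for real-vs-spoof classification.
import numpy as np
from skimage.feature import local_binary_pattern
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import VotingClassifier

# VGG16 without the top layers, global-average pooled: one 512-d vector per face.
vgg = VGG16(weights="imagenet", include_top=False, pooling="avg")

def lbp_histogram(gray_face, points=8, radius=1):
    """Uniform LBP histogram of a grayscale face crop."""
    lbp = local_binary_pattern(gray_face, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def vgg_embedding(rgb_face):
    """512-d VGG16 embedding of a 224x224 RGB face crop."""
    x = preprocess_input(rgb_face.astype("float32")[None, ...])
    return vgg.predict(x, verbose=0)[0]

def face_features(rgb_face, gray_face):
    """Concatenate LBP and VGG16 features for a single face crop."""
    return np.concatenate([lbp_histogram(gray_face), vgg_embedding(rgb_face)])

# X_train: rows built with face_features(); y_train: 1 = real, 0 = spoof.
ensemble = VotingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("knn", KNeighborsClassifier(n_neighbors=5)),
                ("dt", DecisionTreeClassifier())],
    voting="soft",
)
# ensemble.fit(X_train, y_train)
# y_pred = ensemble.predict(X_test)
```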
