Abstract

Face liveness detection is an important research topic in face-based online authentication. Current face liveness detection algorithms utilize either static or dynamic features, but not both, even though the two kinds of features offer complementary advantages. In this paper, we propose a scheme that combines dynamic and static features to leverage the strengths of each. First, dynamic maps are obtained from the inter-frame motion in the video. Then, using Convolutional Neural Networks (CNNs), dynamic and static features are extracted from the dynamic maps and the images, respectively. Next, the fully connected layers of the CNNs, which encode the dynamic and static features, are concatenated to form the fused features. Finally, the fused features are used to train a binary Support Vector Machine (SVM) classifier, which classifies each input into one of two groups: real faces or fake faces. We evaluate our algorithm by classifying images from two public databases. Experimental results demonstrate that our algorithm outperforms current state-of-the-art face liveness detection algorithms.
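The two data operations the abstract describes outside the CNNs, computing motion-based dynamic maps from consecutive frames and fusing the two feature vectors by concatenation, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dynamic maps are approximated here as absolute inter-frame differences, and the CNN fully-connected-layer outputs are replaced by placeholder vectors. All function and variable names are illustrative.

```python
import numpy as np

def dynamic_maps(frames):
    """Approximate inter-frame motion as the absolute difference of
    consecutive frames (a stand-in for the paper's dynamic-map step)."""
    frames = np.asarray(frames, dtype=np.float32)
    return np.abs(frames[1:] - frames[:-1])

def fuse_features(static_feat, dynamic_feat):
    """Fuse the two CNN feature vectors by simple concatenation,
    mirroring the joining of the fully connected layers."""
    return np.concatenate([static_feat, dynamic_feat])

# Toy grayscale video: three 4x4 frames with increasing intensity.
video = np.stack([np.full((4, 4), v) for v in (0.0, 1.0, 3.0)])
maps = dynamic_maps(video)            # shape (2, 4, 4): one map per frame pair

# Placeholder CNN feature vectors (in the paper these come from the
# fully connected layers of the static and dynamic CNNs).
static_feat = np.zeros(128, dtype=np.float32)
dynamic_feat = np.ones(128, dtype=np.float32)
fused = fuse_features(static_feat, dynamic_feat)   # shape (256,)
```

The fused vector would then be fed to a binary SVM for real-vs-fake classification; that training step is omitted here since it depends on labeled data and a specific SVM implementation.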
