Abstract

Face liveness detection is an important research topic in face-based online authentication. Current face liveness detection approaches exploit either static or dynamic features, but not both, even though the two kinds of features offer complementary advantages. In this paper, we propose a scheme that combines dynamic and static features to exploit the merits of both for face liveness detection. First, dynamic maps are computed from the inter-frame motion in the video, capturing the motion information of the face. Then, a Convolutional Neural Network (CNN) extracts dynamic and static features from the dynamic maps and the frame images, respectively. Next, the CNN's fully connected layers containing the dynamic and static features are concatenated to form a fused feature. Finally, the fused features are used to train a binary Support Vector Machine (SVM) classifier, which labels each frame as containing either a real or a fake face. Experimental results and the corresponding analysis demonstrate that the proposed scheme detects face liveness by fusing dynamic and static features, and that it outperforms current state-of-the-art face liveness detection approaches.
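The fusion pipeline described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: `extract_features` is a hypothetical stand-in for the CNN's fully connected layer output (here simple block-average pooling), the dynamic map is approximated by averaged inter-frame absolute differences, and the clips are synthetic, with "real" faces assumed to show more inter-frame motion than replayed "fake" ones.

```python
import numpy as np
from sklearn.svm import SVC

def dynamic_map(frames):
    # Inter-frame motion: absolute differences between consecutive frames,
    # averaged over the clip (a simple stand-in for the paper's dynamic map).
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return diffs.mean(axis=0)

def extract_features(image, pool=8):
    # Hypothetical stand-in for CNN fully connected features:
    # block-average pooling of the image into a flat vector.
    h, w = image.shape
    return (image[:h - h % pool, :w - w % pool]
            .reshape(h // pool, pool, w // pool, pool)
            .mean(axis=(1, 3)).ravel())

def fused_feature(frames):
    static = extract_features(frames[len(frames) // 2])   # static: one frame image
    dynamic = extract_features(dynamic_map(frames))       # dynamic: the motion map
    return np.concatenate([static, dynamic])              # concatenated fused feature

rng = np.random.default_rng(0)

def make_clip(motion):
    # Synthetic 5-frame clip: per-frame noise scaled by a motion level.
    base = rng.random((32, 32))
    return np.stack([base + motion * rng.random((32, 32)) for _ in range(5)])

# Label 0 = fake (low motion), 1 = real (high motion); values are assumptions.
X = np.stack([fused_feature(make_clip(m)) for m in [0.01] * 20 + [0.5] * 20])
y = np.array([0] * 20 + [1] * 20)

# Binary SVM on the fused features, as in the final step of the scheme.
clf = SVC(kernel="linear").fit(X, y)
print(clf.score(X, y))
```

Because the fused vector carries both appearance and motion cues, the SVM can separate the two classes even when either cue alone is ambiguous, which is the intuition behind combining the two feature types.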
