Abstract

In video-based face recognition, different video sequences of the same subject exhibit variations in pose, illumination, and expression, which make designing an effective video-based face-recognition system challenging. In this paper, we propose a dictionary-based approach using dense, high-dimensional features extracted from multi-scale patches centered at detected facial landmarks for video-to-video face identification and verification. Experiments on unconstrained video sequences from the Multiple Biometric Grand Challenge (MBGC) and Face and Ocular Challenge Series (FOCS) datasets show that our method performs significantly better than many state-of-the-art video-based face recognition algorithms.
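The feature-extraction step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the patch sizes, landmark positions, and zero-padding scheme are all assumptions made for the example; the paper's actual features are computed from patches at detected landmarks.

```python
import numpy as np

def extract_multiscale_features(image, landmarks, scales=(8, 16, 32)):
    """Concatenate flattened patches at several scales around each landmark.

    For each landmark (x, y) and each half-width s in `scales`, crop a
    (2s x 2s) patch (zero-padded at image borders) and append its pixels,
    yielding one dense, high-dimensional descriptor per frame.
    """
    h, w = image.shape
    feats = []
    for (x, y) in landmarks:
        for s in scales:
            # Zero-pad so patches near the border keep a fixed size.
            patch = np.zeros((2 * s, 2 * s), dtype=image.dtype)
            y0, y1 = max(0, y - s), min(h, y + s)
            x0, x1 = max(0, x - s), min(w, x + s)
            patch[y0 - (y - s):y0 - (y - s) + (y1 - y0),
                  x0 - (x - s):x0 - (x - s) + (x1 - x0)] = image[y0:y1, x0:x1]
            feats.append(patch.ravel())
    return np.concatenate(feats)

# Example: a synthetic 128x128 frame with three hypothetical landmarks
# (eye corners and mouth center are purely illustrative positions).
frame = np.random.default_rng(0).random((128, 128))
landmarks = [(40, 50), (88, 50), (64, 90)]
fv = extract_multiscale_features(frame, landmarks)
print(fv.shape)  # one descriptor of length len(landmarks) * sum((2s)^2)
```

In the dictionary-based setting, descriptors of this kind from all frames of a video would then be used to learn a per-subject dictionary against which query videos are matched.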
