Abstract

Facial biometric systems are vulnerable to fraudulent access attempts in which photographs or videos of a valid user are presented to the sensor, known as “spoofing attacks”. Multiple protection measures have been proposed, but limited attention has been dedicated to exclusively motion-based countermeasures since the arrival of video and mask attacks. A novel motion-based countermeasure that exploits natural and unnatural motion cues is presented. The proposed method takes advantage of the Constrained Local Neural Fields (CLNF) face-tracking algorithm to extract rigid and non-rigid face motions. Similar to bag-of-words feature encoding, a vocabulary of motion sequences is constructed to derive discriminant mid-level motion features using the Fisher vector framework. Extensive experiments are conducted on the ReplayAttack-DB, CASIA-FASD and MSU-MFSD databases. Complementary experiments on rigid mask attacks from the public 3DMAD database are also conducted, and generalization issues are investigated, in particular via cross-database evaluation.
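As an illustration of the feature-encoding stage described above, the following is a minimal sketch of Fisher vector encoding over a GMM-based “vocabulary” of motion descriptors. All names, dimensions, and the random stand-in data are hypothetical; in the actual method, the descriptors would come from CLNF-tracked rigid and non-rigid face motions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors, gmm):
    """Encode a set of local descriptors (N x D) into a Fisher vector:
    gradients of the GMM log-likelihood w.r.t. the component means and
    variances, with the standard power and L2 normalization."""
    X = np.atleast_2d(descriptors)
    N, D = X.shape
    K = gmm.n_components
    q = gmm.predict_proba(X)           # soft assignments, N x K
    pi = gmm.weights_                  # K mixture weights
    mu = gmm.means_                    # K x D component means
    sigma = np.sqrt(gmm.covariances_)  # K x D std devs (diagonal covariances)

    parts = []
    for k in range(K):
        diff = (X - mu[k]) / sigma[k]  # whitened deviations, N x D
        g_mu = (q[:, k, None] * diff).sum(0) / (N * np.sqrt(pi[k]))
        g_sig = (q[:, k, None] * (diff ** 2 - 1)).sum(0) / (N * np.sqrt(2 * pi[k]))
        parts.extend([g_mu, g_sig])
    fv = np.concatenate(parts)                 # length 2 * K * D
    fv = np.sign(fv) * np.sqrt(np.abs(fv))     # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)   # L2 normalization

# Hypothetical example: an 8-word vocabulary over 6-dim motion descriptors.
rng = np.random.default_rng(0)
train = rng.normal(size=(500, 6))      # stand-in for training motion features
gmm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(train)
video_feats = rng.normal(size=(40, 6))  # stand-in for one video's motion sequence
fv = fisher_vector(video_feats, gmm)
print(fv.shape)  # (96,) = 2 * K * D
```

The resulting fixed-length vector serves as the mid-level representation that a standard classifier (e.g. a linear SVM) could consume, regardless of how many motion descriptors each video produces.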
