Abstract

In this paper, we propose a salience-aware face presentation attack detection (SAFPAD) approach that takes advantage of deep reinforcement learning to exploit salient local part information in face images. Most existing deep face presentation attack detection approaches extract features from the entire image or from several fixed regions. However, the discriminative information useful for presentation attack detection is unevenly distributed across the image because of variations in illumination and presentation attack instruments, so treating all regions equally fails to highlight the most discriminative cues needed for accurate and robust detection. To address this, we identify the discriminative salient parts with deep reinforcement learning and focus on them to alleviate the adverse effects of redundant information in face images. We fuse high-level features with local features, which guides the policy network to exploit discriminative patches and assists the classification network in producing more accurate predictions. The SAFPAD model is trained jointly with deep reinforcement learning to generate salient locations. Extensive experiments on five public datasets demonstrate that our approach achieves very competitive performance thanks to its concentrated use of salient local information.
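The core idea of learning where to look can be illustrated with a minimal sketch (not the paper's actual architecture): a softmax policy over a coarse grid of candidate patch locations, updated with the expected REINFORCE policy-gradient rule. The 4x4 grid, the toy per-location rewards, and the assumption that one cell carries the discriminative cue are illustrative stand-ins for the classifier-driven reward described above.

```python
import numpy as np

N_LOCS = 16                      # 4x4 grid of candidate patch locations
theta = np.zeros(N_LOCS)         # policy logits, one per location

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy reward (assumption): pretend location 5 contains the spoof-revealing
# cue, so classifying from that patch pays off more than from the others.
rewards = np.full(N_LOCS, 0.3)
rewards[5] = 0.9

lr = 1.0
for _ in range(500):
    probs = softmax(theta)
    # Expected-reward baseline; advantage = reward minus its expectation
    # under the current policy.
    adv = rewards - rewards @ probs
    # Exact policy gradient for a softmax policy:
    #   d E[r] / d theta_j = p_j * (r_j - E[r])
    theta += lr * probs * adv

probs = softmax(theta)
best = int(np.argmax(probs))
print(best, round(float(probs[best]), 3))  # policy concentrates on location 5
```

In the paper's setting the scalar reward would instead come from the classification network's performance on the sampled patch, and the policy would condition on fused image features rather than being a bare logit vector; the update rule, however, has the same policy-gradient form.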
