Abstract

As performance on some aspects of the Labeled Faces in the Wild (LFW) benchmark approaches 100% accuracy, there is intense debate over whether the unconstrained face verification problem has already been solved. In this paper, we study a new face verification problem that assumes an impostor will deliberately seek out a person with a similar-looking face to invade the biometric system. To simulate this deliberate impostor attack, we first construct a Fine-Grained LFW (FGLFW) database, which replaces the negative pairs of LFW with 3000 similar-looking face pairs deliberately selected from the original image folders by human crowdsourcing. Our controlled human survey reports 99.85% accuracy on LFW but only 92.03% on FGLFW. As algorithm baselines, we evaluate several state-of-the-art metric learning, face descriptor, and deep learning methods on the new FGLFW database; their accuracy drops by about 10–20% compared with the corresponding LFW performance. To address this challenge, we develop a Deep Convolutional Maxout Network (DCMN) that aims to tolerate multi-modal intra-personal variations while distinguishing fine-grained, localized inter-personal facial details. The experimental results suggest that the proposed DCMN significantly outperforms current techniques such as DeepFace, DeepID2, and VGG-Face. Fusing the scores of the proposed DCMN with those of human operators notably boosts the verification accuracy from 92% to 96%, suggesting that human-algorithm partnerships are a promising way to detect similar-looking deliberate impostors.
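The architectural details of the DCMN are given in the full paper; purely as an illustration of the maxout idea named in the abstract, the sketch below shows a convolutional maxout unit in PyTorch. The class name MaxoutConv2d, the group size k, and the use of PyTorch are assumptions for this example, not the authors' implementation.

```python
import torch
import torch.nn as nn


class MaxoutConv2d(nn.Module):
    """Illustrative convolutional maxout unit (not the paper's exact block).

    It computes k parallel convolutional feature maps and takes an
    element-wise maximum over them, giving a piecewise-linear activation
    that can model multi-modal responses.
    """

    def __init__(self, in_channels, out_channels, kernel_size, k=2, **conv_kwargs):
        super().__init__()
        self.k = k
        self.out_channels = out_channels
        # One convolution producing k * out_channels maps, later split into k groups.
        self.conv = nn.Conv2d(in_channels, out_channels * k, kernel_size, **conv_kwargs)

    def forward(self, x):
        y = self.conv(x)                                   # (N, k*C, H, W)
        n, _, h, w = y.shape
        y = y.view(n, self.k, self.out_channels, h, w)     # split into k pieces
        return y.max(dim=1).values                         # element-wise max over pieces


# Minimal usage example on a dummy face crop batch.
if __name__ == "__main__":
    layer = MaxoutConv2d(in_channels=3, out_channels=32, kernel_size=3, k=2, padding=1)
    out = layer(torch.randn(4, 3, 128, 128))
    print(out.shape)  # torch.Size([4, 32, 128, 128])
```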
