Abstract

Face presentation attacks have become a clear and present threat to face recognition systems, and many countermeasures, known as presentation attack detection (PAD) methods, have been proposed to mitigate them. Some of these countermeasures use features extracted directly from well-known color spaces (e.g., RGB, HSV and YCbCr) to distinguish fake face images from genuine ("live") ones. However, the existing color spaces were originally designed for displaying the visual content of images or videos with high fidelity, and they are not well suited for directly discriminating between live and fake face images. Therefore, in this paper, we propose a deep-learning system, called CompactNet, for learning a compact space tailored for face PAD. More specifically, the proposed CompactNet does not extract features directly in an existing color space; instead, it feeds the color face image into a layer-by-layer progressive space generator. Then, under the optimization of the "points-to-center" triplet loss function, the generator learns a compact space with small intra-class distance, large inter-class distance and a safe interval between the classes. Finally, the feature of the image in the compact space is extracted by a pre-trained feature extractor and used for classification. Reported experiments on three publicly available face PAD databases, namely Replay-Attack, OULU-NPU and HKBU-MARs V1, show that CompactNet separates the two classes of fake and genuine faces very well and significantly outperforms state-of-the-art PAD methods.
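The abstract does not give the exact formulation of the "points-to-center" triplet loss. A minimal sketch of one plausible variant, assuming Euclidean distances to per-class embedding centers and a fixed safety margin (both assumptions, not taken from the paper), could look like:

```python
import numpy as np

def points_to_center_triplet_loss(embeddings, labels, margin=0.5):
    """Hypothetical points-to-center triplet loss.

    For each embedded point, the hinge penalizes cases where the
    distance to its own class center (intra-class) is not at least
    `margin` smaller than the distance to the nearest other-class
    center (inter-class). The margin plays the role of the "safe
    interval" between classes mentioned in the abstract.
    """
    embeddings = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    # Per-class centers: mean embedding of each class in the batch.
    centers = {c: embeddings[labels == c].mean(axis=0) for c in classes}
    losses = []
    for x, y in zip(embeddings, labels):
        d_pos = np.linalg.norm(x - centers[y])        # intra-class distance
        d_negs = [np.linalg.norm(x - centers[c])      # inter-class distances
                  for c in classes if c != y]
        losses.append(max(0.0, d_pos - min(d_negs) + margin))
    return float(np.mean(losses))
```

With well-separated genuine/fake clusters the loss is zero; when the two classes overlap, the loss is positive, driving the generator to enlarge the inter-class gap.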

