Abstract

An ideal biometric template security method must secure the templates without compromising matching performance. Many biometric template protection methods have been reported in recent years, but most of them involve a tradeoff between matching performance and template security. Some of these approaches also require re-enrollment if the biometric templates are compromised. In this work, we propose a method for face template protection that improves matching performance while providing high template security and also addresses the re-enrollment problem. Our approach relies on computing identity- or class-specific perturbations to the input facial feature vectors as a function of the gradients of a mapping network, as in targeted adversarial learning. Further, a cryptographic one-way hash function is applied to the target-specific class labels, and the hashes are stored as templates in the database during enrollment. During verification, given an input face image of a user, the facial features extracted by a Convolutional Neural Network (CNN), along with the pre-computed perturbations, are used to predict the template, which is matched against the corresponding template of the user stored during enrollment. If any template is compromised, it is revoked and a new set of perturbations for the corresponding user is computed with respect to the new target-specific class label assigned to that user. The efficacy of the approach is evaluated on three face datasets, namely CMU-PIE, FEI and Color-FERET. The proposed method achieves ~98% Genuine Accept Rate (GAR) at zero False Accept Rate (FAR). This approach outperforms the state of the art by ~7% in terms of matching performance, while solving the re-enrollment problem without compromising template security, largely due to the way the perturbations are computed.
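The sketch below illustrates, under simplifying assumptions, the enrollment/verification flow the abstract describes: a targeted, gradient-based perturbation nudges a CNN feature vector toward an assigned class label, and only a one-way hash of that label is stored as the template. The toy mapping network, feature dimensions, step sizes, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming pre-extracted CNN features and a trained mapping network.
import hashlib
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, NUM_CLASSES = 512, 1000  # assumed feature size / label space

# Stand-in for the mapping network from facial features to class labels.
mapping_net = nn.Sequential(nn.Linear(FEAT_DIM, 256), nn.ReLU(),
                            nn.Linear(256, NUM_CLASSES))
mapping_net.eval()

def hash_label(label: int) -> str:
    """One-way hash of the target-specific class label; only this is stored."""
    return hashlib.sha256(str(label).encode()).hexdigest()

def compute_perturbation(feature, target_label, steps=20, step_size=0.05):
    """Targeted perturbation pushing the feature toward target_label,
    computed from gradients of the mapping network (adversarial-style)."""
    target = torch.tensor([target_label])
    delta = torch.zeros_like(feature, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(mapping_net(feature + delta), target)
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()  # descend loss toward the target class
        delta.grad.zero_()
    return delta.detach()

# --- Enrollment (feature vector would come from the face CNN) ---
enrolled_feature = torch.randn(1, FEAT_DIM)        # placeholder CNN feature
assigned_label = 42                                # target-specific class label
stored_template = hash_label(assigned_label)       # hashed template in the database
stored_delta = compute_perturbation(enrolled_feature, assigned_label)

# --- Verification (fresh feature from a probe image of the same user) ---
probe_feature = enrolled_feature + 0.01 * torch.randn(1, FEAT_DIM)
predicted_label = mapping_net(probe_feature + stored_delta).argmax(dim=1).item()
print("match:", hash_label(predicted_label) == stored_template)
```

Revocation in this sketch would amount to assigning the user a new label, recomputing the perturbation, and storing the new hash, which is how the abstract's re-enrollment argument is framed.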
