Abstract

We focus on attacks against a biometric authentication system that aim to reconstruct a biometric sample of the subject from the protected template. Such systems comprise three blocks: feature extraction, binarization, and protection. We propose a new white-box reversing attack on the binarization block that approximates the biometric template given the binary string produced by the binarization block. The experiments show that the proposed attack reconstructs highly accurate approximations that pass the verification threshold when compared against templates produced from both the same and different samples of the subject. We then integrate this attack with known attacks on the other two blocks, namely, a variant of a guessing attack to extract the binary string and a biometric inversion attack to reconstruct a sample from its template. We instantiate this end-to-end attack on a face authentication system that uses fuzzy commitments for protection. Facial images reconstructed by the end-to-end attack closely resemble the original ones. In the simplest attack scenario, more than 83% of these reconstructed templates succeed in unlocking an account (when the system is configured to a 0.1% FMR). Even in the "hardest" setting (in which we take a reconstructed image from one system and use it against a different system, with a different feature extraction process), the reconstructed image achieves a success rate 170 to 210 times higher than the system's FMR.
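To make the three-block pipeline concrete, the following is a minimal sketch (not the paper's construction) of binarization followed by a fuzzy commitment. A toy repetition code stands in for the error-correcting code a real deployment would use, and all names, thresholds, and parameters here are illustrative assumptions:

```python
import hashlib
import secrets

def binarize(features, thresholds):
    # Binarization block: threshold each real-valued feature to one bit.
    return [1 if f > t else 0 for f, t in zip(features, thresholds)]

def repetition_encode(bits, r=3):
    # Toy ECC for illustration: repeat each message bit r times.
    return [b for b in bits for _ in range(r)]

def repetition_decode(bits, r=3):
    # Majority vote within each group of r bits corrects < r/2 errors.
    return [1 if sum(bits[i:i + r]) > r // 2 else 0
            for i in range(0, len(bits), r)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def commit(template_bits, r=3):
    # Protection block (fuzzy commitment): mask the binary template
    # with a random codeword; store only the mask and a hash.
    msg = [secrets.randbelow(2) for _ in range(len(template_bits) // r)]
    helper = xor(template_bits, repetition_encode(msg, r))
    tag = hashlib.sha256(bytes(msg)).hexdigest()
    return helper, tag

def verify(query_bits, helper, tag, r=3):
    # Verification succeeds iff the query string is within the
    # code's error-correction radius of the enrolled string.
    msg = repetition_decode(xor(query_bits, helper), r)
    return hashlib.sha256(bytes(msg)).hexdigest() == tag
```

Under this sketch, a query whose binary string differs from the enrolled one by few bits still unlocks the commitment, while a distant string does not; the attacks described above exploit each stage of this pipeline in turn.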
