Abstract

The usage of Fingerprint-based Authentication Systems (FAS) has been increasing in recent years thanks to the growing availability of cheap and reliable scanners. In order to bypass a FAS with a counterfeit fingerprint, a Presentation Attack (PA) can be mounted. As a consequence, a liveness detector able to distinguish authentic from fake biometric traits has become an almost essential component of every FAS. Deep Learning based approaches have proven very effective against fingerprint presentation attacks, becoming the current state of the art in liveness detection. However, it has been shown that state-of-the-art CNNs can be made to arbitrarily misclassify an image by applying a suitable small perturbation to it, often imperceptible to the human eye. The aim of this work is to understand whether, and to what extent, adversarial perturbations can affect FASs, as a preliminary step towards developing an adversarial presentation attack. Results show that adversarial perturbations can be exploited to mislead both the FAS liveness detector and the authentication system, through perturbations that are almost imperceptible to the human eye.
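As background for the kind of perturbation mentioned above, the following is a minimal sketch of one standard crafting technique, the Fast Gradient Sign Method (FGSM) of Goodfellow et al.; the model, tensor shapes, and epsilon value are illustrative assumptions, not the actual detector or attack pipeline evaluated in this work.

    # Illustrative FGSM sketch (Goodfellow et al., 2015); hypothetical names,
    # not this paper's detector or attack pipeline.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.01):
        """Return an adversarially perturbed copy of `image`.

        model   : a CNN classifier (e.g. a liveness detector), in eval mode
        image   : input tensor of shape (1, C, H, W), values in [0, 1]
        label   : ground-truth class index, tensor of shape (1,)
        epsilon : perturbation budget; small values keep the change
                  almost imperceptible to the human eye
        """
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step in the direction that increases the loss, then clamp
        # back to the valid pixel range.
        adv = image + epsilon * image.grad.sign()
        return adv.clamp(0.0, 1.0).detach()

For a small enough epsilon, the perturbed image remains visually indistinguishable from the original while flipping the classifier's decision, which is the property the adversarial presentation attack relies on.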
