Abstract

Deep neural networks are vulnerable to adversarial samples, which poses a potential threat to applications that deploy deep learning models in practical conditions. A typical example is the fingerprint liveness detection module in fingerprint authentication systems. Inspired by the great progress of deep learning, deep network-based fingerprint liveness detection algorithms have sprung up and now dominate the field. In this paper, we therefore investigate the feasibility of deceiving state-of-the-art deep network-based fingerprint liveness detection schemes by exploiting this vulnerability. Extensive evaluations are made with three existing adversarial methods: FGSM, MI-FGSM, and DeepFool. We also propose an adversarial attack method that enhances the robustness of adversarial fingerprint images to transformations such as rotation and flipping. We demonstrate that these otherwise outstanding schemes are likely to classify fake fingerprints as live ones once tiny perturbations are added, even without access to the internal details of the model they use. The experimental results reveal a serious security loophole in these schemes, and urgent attention should be paid to adversarial robustness, not only in fingerprint liveness detection but in all deep learning applications.
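The proposed transformation-robust attack is only summarized above; a minimal sketch of the underlying idea, accumulating the attack gradient over rotated and flipped copies of the fingerprint image before taking a single FGSM-style step, is shown below. The function name transform_robust_fgsm, the angle set, and the PyTorch/torchvision calls are illustrative assumptions and not the authors' implementation.

    import torch
    import torchvision.transforms.functional as TF

    def transform_robust_fgsm(model, image, label, eps=0.03, angles=(-15, 0, 15)):
        # image: (1, C, H, W) tensor in [0, 1]; label: (1,) tensor of class indices.
        image = image.clone().detach().requires_grad_(True)
        loss = 0.0
        for angle in angles:
            rotated = TF.rotate(image, angle)           # differentiable for tensor inputs
            for view in (rotated, TF.hflip(rotated)):   # also include the flipped view
                loss = loss + torch.nn.functional.cross_entropy(model(view), label)
        loss.backward()
        # One sign step of size eps on the accumulated gradient keeps the change tiny
        # while making it less sensitive to rotation and flipping at test time.
        adv = (image + eps * image.grad.sign()).clamp(0.0, 1.0)
        return adv.detach()

Any torch.nn.Module classifier could stand in for model here; in this paper the targets are CNN-based liveness detectors.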

Highlights

  • The rapid growth in deep learning and in particular convolutional neural networks (CNNs) brings new solutions to many problems in computer vision, big data [1], and security [2]

  • The fingerprint datasets used in this paper are from the Liveness Detection Competition (LivDet), covering the years 2013 [42] and 2015 [43], namely LivDet 2013 and LivDet 2015 (Table 1)

  • LivDet 2013 consists of fingerprint images captured by four different sensors


Introduction

The rapid growth in deep learning, and in particular convolutional neural networks (CNNs), brings new solutions to many problems in computer vision, big data [1], and security [2]. While deep networks have seen phenomenal success in many domains, Szegedy et al. [10] first demonstrated that, by intentionally adding certain tiny perturbations, an image can remain indistinguishable from the original while the network misclassifies it into a class other than the original prediction. This is called an adversarial attack, and the perturbed image is the so-called adversarial sample. Interestingly, we notice that the perturbation images show some similarity to encrypted images [12–16], but the former are
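To make this concrete, the simplest of the attacks evaluated here, FGSM, perturbs an image by one step in the direction of the sign of the loss gradient. The sketch below is a minimal PyTorch illustration, assuming a batched image tensor in [0, 1] and a cross-entropy-trained classifier; it is not the exact code used in the experiments.

    import torch
    import torch.nn.functional as F

    def fgsm(model, image, label, eps=0.01):
        # Fast Gradient Sign Method: one gradient-sign step of size eps.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # For a small eps the perturbation is visually negligible, yet it can flip
        # the classifier's decision (e.g., from "fake fingerprint" to "live").
        return (image + eps * image.grad.sign()).clamp(0.0, 1.0).detach()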

