Abstract

Iris pattern recognition has significantly advanced the field of biometric authentication owing to its high stability and uniqueness. These physical characteristics play an essential role in security applications and related areas. However, presentation attacks, also known as spoofing techniques, can bypass biometric authentication systems using artefacts such as printed images, artificial eyes, and textured contact lenses. Many liveness detection methods have been proposed to improve the robustness of these systems. The International Iris Liveness Detection competition (LivDet-Iris), in which the effectiveness of liveness detection methods is evaluated, was first held in 2013, and its latest edition took place in 2020. In this paper, we present the approach that won the LivDet-Iris 2020 competition using a two-class scenario (bona fide iris images vs. presentation attack iris images). Additionally, we propose new three-class and four-class scenarios that complement the competition results. These methods use a serial architecture based on a modified MobileNetV2, trained from scratch, to classify bona fide iris images versus presentation attack images. The bona fide class consists of live iris images, whereas the presentation attack instrument (PAI) classes consist of cadaver, printed, and contact-lens images, for a total of four classes. All images were pre-processed and weighted per class to ensure a fair evaluation. The approach focuses primarily on detecting the bona fide class rather than on improving the detection of presentation attack instruments. For the two-, three-, and four-class scenarios, BPCER10 values of 0.99%, 0.16%, and 0.83% were obtained, respectively, and BPCER20 values of 3.09%, 0.16%, and 3.77%, with the proposed three-class serial model being the best overall. These results are competitive with those reported in the LivDet-Iris 2020 competition.
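
To make the reported setup easier to interpret, the sketch below shows, in PyTorch, the kind of building blocks the abstract describes: a MobileNetV2 backbone trained from scratch with its classifier head replaced for four classes, a per-class weighted cross-entropy loss, and a helper that computes BPCER at a fixed APCER operating point (APCER = 10% for BPCER10, APCER = 5% for BPCER20). This is a minimal illustrative sketch, not the authors' implementation: the class order, the weight values, and the choice of framework are assumptions, and the serial (cascaded) arrangement of models is not reproduced here.

```python
# Illustrative sketch only; class names, weight values, and framework choice
# are assumptions, not the authors' exact configuration.
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

CLASSES = ["bona_fide", "cadaver", "printed", "contact_lens"]  # assumed order

def build_pad_model(num_classes: int = len(CLASSES)) -> nn.Module:
    """MobileNetV2 backbone trained from scratch, head replaced for PAD classes."""
    model = mobilenet_v2(weights=None)  # no pre-trained weights: train from scratch
    model.classifier[1] = nn.Linear(model.last_channel, num_classes)
    return model

# Per-class loss weights to compensate for class imbalance (placeholder values).
class_weights = torch.tensor([1.0, 2.0, 1.5, 1.5])
criterion = nn.CrossEntropyLoss(weight=class_weights)

def bpcer_at_apcer(bona_fide_scores, attack_scores, target_apcer=0.10):
    """BPCER at the threshold where APCER equals `target_apcer`.

    Scores are 'probability of bona fide': attacks scoring at or above the
    threshold are wrongly accepted (APCER), bona fide samples scoring below
    it are wrongly rejected (BPCER). target_apcer=0.10 -> BPCER10,
    target_apcer=0.05 -> BPCER20.
    """
    threshold = np.quantile(np.asarray(attack_scores), 1.0 - target_apcer)
    return float(np.mean(np.asarray(bona_fide_scores) < threshold))
```

In a multi-class setting, the softmax probability assigned to the bona fide class would typically serve as the score passed to a helper like `bpcer_at_apcer`.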

Highlights

  • Iris recognition systems have been shown to be robust over time, affordable, non-invasive, and touchless; these strengths will allow the technology to grow in the market in the coming years [1]

  • Database: This paper presents two new databases: one that increases the number of bona fide images (10,000) and a second that increases the number of high-quality printed Presentation Attack Instrument (PAI) images (1,800)

  • These images present a challenge for the classifier because their PAI species are not represented in the training data

Introduction

Iris recognition systems have been shown to be robust over time, affordable, non-invasive, and touchless; these strengths will allow the technology to grow in the market in the coming years [1]. However, such systems remain vulnerable to presentation attacks. One example is the hacking of Samsung Galaxy S8 devices equipped with an iris unlock system, using a regular printer and a contact lens. Cases like this have been reported to the public by hacking groups seeking recognition, both in real criminal cases and in live demonstrations at biometrics conferences. An ideal presentation attack detection (PAD) technique should be able to detect all of these attacks, along with any new or unknown PAI species that may be developed in the future [4]. PAD for iris recognition systems is a very dynamic topic, as past editions of the LivDet competition have shown, revealing that open problems remain before efficient methods can be deployed in capture devices. This paper contributes to improving the state of the art, adds a new database, and explains the methodology used by the winning team.
