Abstract

With the rapid globalization of technology, a more reliable and secure method of online authentication is needed. This can be achieved by using each individual's distinctive biometric identifiers, such as the face, iris, fingerprint, palmprint, etc. However, the uniqueness of each identifier is bounded, and consequently there is a limit to the population that a biometric recognition system can sustain before false matches occur. Knowing the maximum population that a biometric modality can uniquely represent is therefore essential now more than ever. To address this general problem, we turn to iris biometrics and measure their uniqueness. The measure of iris uniqueness was first introduced by John Daugman in 2003, and its analysis has remained an open research problem since. Daugman defines uniqueness as the ability to enroll more and more classes into a recognition system while the probability of collision among the classes remains fixed and near zero. Due to errors introduced while collecting these datasets (such as occlusions, illumination conditions, camera noise, motion, and out-of-focus blur) and quality degradation from any signal processing of the iris data, even the highest-quality datasets will not approach a perfect zero probability of collision. Because of this, we appeal to techniques from information theory to find the maximum possible population the system can support while also measuring the quality of the iris data in the datasets themselves. This work develops two new techniques to find the maximum population of an iris database: establishing the limitations of Daugman's widely accepted IrisCode, and proposing a new methodology leveraging the raw iris data. Firstly, Daugman's IrisCode is defined as a binary template representing each independent class present in the database.
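The comparison of two binary IrisCode templates is conventionally scored with a fractional Hamming distance that counts disagreeing bits only where both templates are valid (unoccluded). As a minimal illustrative sketch (the function name and the 2048-bit code length are assumptions for the example; Daugman's actual implementation also searches over rotations):

```python
import numpy as np

def fractional_hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary IrisCodes,
    counting only bits that are valid in both occlusion masks."""
    valid = mask_a & mask_b                 # bits usable in both templates
    disagreeing = (code_a ^ code_b) & valid # XOR marks differing bits
    return disagreeing.sum() / valid.sum()

# Two independent random 2048-bit codes score near 0.5 on average.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, 2048).astype(bool)
b = rng.integers(0, 2, 2048).astype(bool)
mask = np.ones(2048, dtype=bool)
hd = fractional_hamming_distance(a, b, mask, mask)
```

Identical codes score 0.0; statistically independent codes cluster around 0.5, which is what makes the impostor distribution usable as a collision model.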
Assuming that a one-to-one encoding is available to map the IrisCode of each class to a new binary codeword, with a length determined by the degrees of freedom inferred from the distribution of pairwise distances between independent class IrisCodes, we appeal to rate-distortion theory (the limits of error-correcting codes) to establish bounds on the maximum population the IrisCode algorithm can sustain, using the minimum Hamming distance (HD) between codewords as a quality metric. Our second approach leverages an autoregressive (AR) model to estimate each iris class's distinctive power spectral density and then assumes a similar one-to-one mapping
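The degrees of freedom mentioned above follow Daugman's binomial fit to the impostor HD distribution: if impostor scores have mean p and variance sigma^2, the effective number of independent bits is N = p(1 - p) / sigma^2. A hedged sketch of that estimate (the simulated scores and the sample size are assumptions for the example, not data from this work):

```python
import numpy as np

def degrees_of_freedom(hd_scores):
    """Estimate the effective number of independent bits from a sample
    of impostor fractional Hamming distances, via Daugman's binomial
    fit N = p(1 - p) / sigma^2."""
    p = np.mean(hd_scores)
    var = np.var(hd_scores)
    return p * (1.0 - p) / var

# Simulated impostor comparisons: fractional HDs of codes with
# N_true effectively independent fair bits.
rng = np.random.default_rng(1)
N_true = 249  # Daugman reported roughly 249 degrees of freedom for IrisCodes
scores = rng.binomial(N_true, 0.5, size=100_000) / N_true
dof = degrees_of_freedom(scores)
```

The estimate recovers N_true up to sampling noise; on real IrisCodes it is far below the raw bit count because neighboring filter responses are correlated.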
