Finger vein recognition is a promising biometric authentication technique that relies on the unique vein patterns in the finger. Existing finger vein recognition methods are based on minutiae features, on binary features such as LBP, LLBP, and PBBM, or on the entire vein pattern. However, minutiae-based features cannot accurately represent the structural or anatomical aspects of the vein pattern, which leads to increased false matches. Recognition based on binary features has limitations such as increased false matches, sensitivity to translation and rotation, and security and privacy issues. A feature representation based on the anatomy of vein patterns is an alternative that can improve recognition performance. At the IJCB 2020 conference, we showed that every finger vein image contains one or more of four special vein patterns, which we referred to as Fork, Eye, Bridge, and Arch (FEBA). In this paper, we enlarge this set to six vein patterns (F1, F2, E, B1, B2, A) by identifying two variations each of the Fork and Bridge vein patterns. Based on six anatomical features of the six possible vein patterns in a vein image, we define a 6 × 6 feature matrix representation for finger vein images. Since this feature representation is based on the anatomical properties of the local vein patterns, it provides template security. We further show that the proposed feature representation is invariant to scaling, translation, and rotation.
Experimental results on two open datasets and an in-house dataset show that the proposed method achieves better recognition performance than existing approaches, with an EER of around 0.02% and an average recognition accuracy of 98%.