Abstract
Finger vein biometrics is one of the most promising approaches to person identification because it offers uniqueness, resistance to forgery, and liveness assurance (bioassay). Due to the limitations of imaging environments, however, captured finger vein images are often low-contrast, blurry, and noisy. Extracting robust and relevant features from such images therefore remains an open research problem. In this paper, we propose a new attention-based deep learning technique for human finger vein identification and recognition, called deep regional learning. Our model relies on an unsupervised method based on optimized K-Means clustering to generate a localized finger vein mask. The resulting binary mask drives our attention learning model by making the deep network focus on the region of interest (ROI) rather than learning over the whole feature domain. As a result, the Deep Regional Attention Model learns more significant features in less time and with fewer computational resources than a conventional deep learning model. For experimental validation, we used several finger vein image datasets extracted and generated with our model: original finger vein images, localized finger vein images (background removed), localized grayscale finger vein images (grayscale, background removed, with projected finger vein lines), and localized colored finger vein images (colored, background removed, with projected finger vein lines). Trained and tested on these datasets, our model outperforms traditional deep learning and other methods.
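The core idea of the abstract can be illustrated with a minimal sketch: cluster pixel intensities with K-Means to obtain a binary vein mask, then use that mask as a gate so a feature map is learned only inside the ROI. This is an illustrative approximation, not the paper's exact pipeline; the function names, the two-cluster choice, and the synthetic image are assumptions for demonstration.

```python
import numpy as np
from sklearn.cluster import KMeans

def vein_mask_kmeans(gray, k=2, seed=0):
    """Cluster pixel intensities into k groups and keep the darkest
    cluster as a binary mask (veins appear dark in NIR finger images)."""
    pixels = gray.reshape(-1, 1).astype(np.float32)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(pixels)
    darkest = np.argmin(km.cluster_centers_.ravel())
    return (km.labels_ == darkest).reshape(gray.shape).astype(np.float32)

def regional_attention(features, mask):
    """Element-wise gate: zero out background activations so learning
    concentrates on the region of interest (ROI)."""
    return features * mask[..., None]  # broadcast mask over channels

# Toy example: synthetic 8x8 "image" with one dark stripe standing in
# for a vein (hypothetical data, not from the paper's datasets).
img = np.full((8, 8), 200, dtype=np.uint8)
img[3:5, :] = 40                      # dark horizontal band
mask = vein_mask_kmeans(img)          # 1 inside the "vein", 0 elsewhere
feat = np.random.rand(8, 8, 4)        # stand-in feature map, 4 channels
gated = regional_attention(feat, mask)  # background activations are zero
```

In the paper's full model, the mask would modulate intermediate feature maps of a deep network during training; here the gating is shown on a plain NumPy array for clarity.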