Abstract
Despite recent advances in deep neural networks for hand-vein identification, existing solutions assume the availability of a large and rich set of training image samples; they therefore still lack the capability to extract robust and discriminative hand-vein features from a single training sample. To overcome this problem, we propose a single-sample-per-person (SSPP) palm-vein identification approach, in which only a single sample per class is enrolled in the gallery set for training. Our approach, named MSMDGAN + CNN, consists of a multi-scale and multi-direction generative adversarial network (MSMDGAN) for data augmentation and a convolutional neural network (CNN) for palm-vein identification. First, a novel data augmentation approach, MSMDGAN, is developed to learn the internal distribution of patches in a single image. The proposed MSMDGAN consists of multiple fully convolutional GANs, each responsible for learning the patch distribution within an image at a different scale and a different direction. Second, given the augmented data produced by MSMDGAN, we design a CNN for single-sample palm-vein recognition. Experimental results on two public hand-vein databases demonstrate that MSMDGAN generates realistic and diverse samples, which, in turn, improve the stability of the CNN. In terms of accuracy, MSMDGAN + CNN outperforms other representative approaches and achieves state-of-the-art recognition results.
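To make the multi-scale, multi-direction idea concrete, the following is a minimal sketch, not the authors' released code: a SinGAN-style pyramid of fully convolutional patch GANs is built for each rotated copy of the single enrolled image, so each GAN models patch statistics at one scale and one direction. The layer widths, number of scales, rotation set, and coarse-to-fine sampling loop are all illustrative assumptions.

# Minimal sketch, assuming a SinGAN-style design: one pyramid of
# fully convolutional patch GANs per rotated copy of the single
# enrolled image. All hyperparameters below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    # Small 3x3 receptive fields keep the GAN focused on local patch
    # statistics rather than the global image layout.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

class PatchGenerator(nn.Module):
    # Fully convolutional generator for one (scale, direction) pair.
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            conv_block(1, ch), conv_block(ch, ch), conv_block(ch, ch),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, noise, prev):
        # Residual refinement of the upsampled coarser-scale output.
        return prev + self.body(noise + prev)

class PatchDiscriminator(nn.Module):
    # Markovian discriminator: outputs one realism score per patch.
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            conv_block(1, ch), conv_block(ch, ch),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

def scale_pyramid(image, num_scales=4, factor=0.75):
    # Downscaled copies of the real image, ordered coarse to fine.
    pyramid = []
    for s in range(num_scales):
        scale = factor ** (num_scales - 1 - s)
        size = [max(8, int(round(d * scale))) for d in image.shape[-2:]]
        pyramid.append(F.interpolate(image, size=size, mode="bilinear",
                                     align_corners=False))
    return pyramid

if __name__ == "__main__":
    single_sample = torch.rand(1, 1, 128, 128)  # one enrolled palm-vein image
    for deg in (0, 90, 180, 270):               # direction axis (assumed set)
        rotated = torch.rot90(single_sample, k=deg // 90, dims=(-2, -1))
        reals = scale_pyramid(rotated)          # scale axis
        gens = [PatchGenerator() for _ in reals]
        # Sample an augmented image: start at the coarsest scale and
        # refine upward, injecting fresh noise at every level.
        fake = torch.zeros_like(reals[0])
        for g, real in zip(gens, reals):
            fake = F.interpolate(fake, size=real.shape[-2:], mode="bilinear",
                                 align_corners=False)
            fake = g(torch.randn_like(real), fake)
        print(deg, fake.shape)

In the paper's actual pipeline, each (scale, direction) generator would presumably be trained adversarially against a patch discriminator on the corresponding level of the real pyramid, and the sampled images would then augment the single gallery sample before CNN training.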