Palm-vein recognition has been the focus of substantial research effort in recent years. Deep learning models, in particular Convolutional Neural Networks (CNNs), automatically learn robust feature representations and thereby achieve good accuracy, but this performance usually comes at the expense of annotating a large training dataset, and labeling vein images is an expensive and tedious process. Although handcrafted data-augmentation schemes usually yield slight performance gains, they cannot cover the complex variations that inherently characterize such images. To overcome this issue, we propose a new unsupervised domain adaptation model, called CycleGAN-based Domain Adaptation (CGAN-DA), that extracts discriminant representations from palm-vein images without requiring any image labeling. Our CGAN-DA model allows joint adaptation at the image and feature levels. Specifically, to enhance the domain invariance of the extracted features, image appearance is transformed across two domains: the palm-vein domain and the retinal domain. We train our model with several losses, namely adversarial losses, a segmentation loss, and a cycle-consistency loss, without any annotation from the target domain (palm-vein images). Our experiments on the public CASIA palm-vein dataset demonstrate that our model significantly outperforms the state of the art in terms of verification accuracy.
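The abstract does not spell out the loss formulation. For reference, the standard cycle-consistency loss from CycleGAN (Zhu et al., 2017), which the model's name suggests it builds on, is

\mathcal{L}_{\mathrm{cyc}}(G, F) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[ \lVert F(G(x)) - x \rVert_1 \right] + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\left[ \lVert G(F(y)) - y \rVert_1 \right],

where, presumably in this paper's setting, G would map palm-vein images x to the retinal domain and F would map retinal images y back; this domain assignment is an assumption inferred from the two domains named in the abstract, not a detail the abstract confirms.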