Abstract

The increasing demand for palmprint biometric systems with low error rates has prompted researchers to use multispectral imaging to overcome the limitations of techniques operating in visible light. To improve the accuracy of multispectral palmprint recognition, we explore two fusion approaches: pixel-level and feature-level fusion. The former relies on a maximum selection rule that combines discriminating information from the discrete wavelet transforms of the different spectral bands of multispectral images. The latter fuses features extracted from subimages. In both approaches, we propose statistical and energy-distribution analysis of the finite ridgelet transform coefficients, owing to their simplicity and low computational complexity. Once the feature vectors are obtained, we perform robust classification to identify or verify individuals with both approaches. The effectiveness of the proposed methods is evaluated with several classifiers for the binary and multiclass cases. Experiments conducted on the Chinese Academy of Sciences Institute of Automation (CASIA) and Hong Kong Polytechnic University (PolyU) databases show that the proposed approaches achieve accuracy rates of 100% and 99.79%, respectively. A comparative study reveals that our approach outperforms, or at least matches, state-of-the-art multispectral palmprint recognition methods.
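To make the pixel-level stage concrete, the following is a minimal sketch of fusion by a maximum selection rule applied to 2-D discrete wavelet transform coefficients, written in Python with PyWavelets. The function names, the choice of wavelet ("db2"), and the decomposition depth are illustrative assumptions, not the exact settings of the paper.

```python
# Sketch: pixel-level fusion of co-registered spectral band images by keeping,
# at every coefficient position, the DWT coefficient with the largest absolute
# value across bands, then inverting the transform.
import numpy as np
import pywt


def _select_max(arrays):
    """Element-wise maximum-absolute-value selection across same-shape arrays."""
    stack = np.stack(arrays)
    idx = np.argmax(np.abs(stack), axis=0)
    return np.take_along_axis(stack, idx[None, ...], axis=0)[0]


def fuse_bands_max(bands, wavelet="db2", level=2):
    """Fuse a list of equally sized 2-D band images into one image."""
    # wavedec2 returns [cA_n, (cH_n, cV_n, cD_n), ..., (cH_1, cV_1, cD_1)]
    decomps = [pywt.wavedec2(b.astype(float), wavelet, level=level) for b in bands]

    fused = []
    for subbands in zip(*decomps):           # same subband across all bands
        if isinstance(subbands[0], tuple):   # detail triple (cH, cV, cD)
            fused.append(tuple(_select_max(g) for g in zip(*subbands)))
        else:                                # approximation subband
            fused.append(_select_max(subbands))
    return pywt.waverec2(fused, wavelet)
```

In use, each multispectral palmprint sample would supply one 2-D array per spectral band, and the fused image would then be passed on to feature extraction and classification.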
