Abstract

Multimodal biometric recognition has attracted intense attention because exploiting multiple sources of data yields significant performance improvements for personal authentication. However, most existing multimodal biometric recognition methods fuse entirely different biometric traits, making it difficult for them to exploit the complementary features of the multimodal data. In this paper, we propose a new multimodal biometric descriptor by jointly learning multi-type collaborative features from multi-modality finger-knuckle-print (FKP) images. First, we form multi-type feature vectors that capture the texture and direction patterns of the multimodal FKP images. Then, we jointly learn a feature projection that maps the multi-type vectors into a compact FKP descriptor. Moreover, our method automatically selects the optimal weights for the multi-type features during feature learning, making the learned feature codes discriminative. Lastly, we integrate the non-overlapping block-wise histograms of the learned binary codes into the final multimodal FKP feature descriptor. Experimental results on the benchmark PolyU FKP database demonstrate the effectiveness of the proposed method for multimodal FKP recognition.
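
To make the descriptor construction concrete, the sketch below shows the last two stages of the pipeline: binarizing projected feature vectors and pooling them into non-overlapping block-wise histograms. This is a minimal illustration, not the paper's implementation: it assumes the projection matrix `W` has already been learned by the joint optimization described above, and the function names, block size, and code length are hypothetical choices.

```python
import numpy as np

def binary_codes(features, W):
    """Map per-pixel multi-type feature vectors to binary codes.

    features : (H, W_img, D) array of multi-type feature vectors
               (e.g., concatenated texture and direction responses).
    W        : (D, K) projection matrix; assumed here to have been
               learned already, so we simply apply it and binarize.
    """
    h, w_img, d = features.shape
    proj = features.reshape(-1, d) @ W            # (H*W_img, K)
    return (proj > 0).astype(np.uint8).reshape(h, w_img, -1)

def blockwise_histograms(codes, block=16):
    """Pool binary codes into non-overlapping block-wise histograms."""
    h, w_img, k = codes.shape
    # Interpret each K-bit code as an integer in [0, 2**K);
    # K should stay small (e.g., 8) so histograms remain compact.
    values = codes.reshape(-1, k) @ (1 << np.arange(k))
    values = values.reshape(h, w_img)
    hists = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w_img - block + 1, block):
            patch = values[i:i + block, j:j + block]
            hists.append(np.bincount(patch.ravel(), minlength=2 ** k))
    # Concatenating the per-block histograms gives the final descriptor.
    return np.concatenate(hists)

# Toy usage with random stand-ins for learned quantities:
feats = np.random.rand(128, 128, 12) - 0.5    # multi-type feature maps
W = np.random.randn(12, 8)                    # stand-in for a learned projection
descriptor = blockwise_histograms(binary_codes(feats, W), block=16)
```

Pooling over non-overlapping blocks keeps coarse spatial layout in the descriptor while remaining robust to small local misalignments, which is why block-wise histograms are a common final stage for binary-code descriptors.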
