Abstract

Multimodal biometric systems have recently gained considerable attention for human identity recognition in uncontrolled scenarios. This chapter presents an improved multimodal biometric recognition approach that integrates ear and profile face biometrics. First, each modality is separately decomposed into a predefined number of scales and orientations using the steerable pyramid transform, and texture features are then extracted from each subband using a histogram-based local descriptor. Three popular local descriptors, local directional patterns (LDP), binarized statistical image features (BSIF), and local phase quantization (LPQ), are employed, and their effectiveness is compared to identify the most discriminative texture descriptor. Finally, the local descriptors of both modalities are fused at the feature level as well as the score level, and individuals are recognized using a kNN classifier. Experiments are conducted on two standard datasets, University of Notre Dame collection E (UND-E) and collection J2 (UND-J2), and the results demonstrate that the proposed multimodal approach using score-level fusion outperforms feature-level fusion. It also achieves higher accuracy than unimodal ear recognition and state-of-the-art multimodal biometrics.
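The score-level fusion step described above can be sketched as follows. This is a minimal illustration, not the chapter's implementation: the random vectors stand in for the LPQ/BSIF/LDP histograms that would be extracted from steerable-pyramid subbands, and the fusion uses a simple sum rule over each kNN matcher's per-class posterior estimates.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-ins for descriptor histograms of each modality
# (assumption: real features would come from steerable-pyramid subbands).
rng = np.random.default_rng(0)
n_subjects, n_per, dim = 5, 8, 64
y = np.repeat(np.arange(n_subjects), n_per)
ear = rng.normal(0, 1, (n_subjects * n_per, dim)) + y[:, None] * 0.8
face = rng.normal(0, 1, (n_subjects * n_per, dim)) + y[:, None] * 0.8

def split(X, labels, k=2):
    # Hold out the last k samples per subject for testing.
    test = np.concatenate([np.where(labels == c)[0][-k:]
                           for c in np.unique(labels)])
    train = np.setdiff1d(np.arange(len(labels)), test)
    return X[train], X[test], labels[train], labels[test]

ear_tr, ear_te, y_tr, y_te = split(ear, y)
face_tr, face_te, _, _ = split(face, y)

# One kNN matcher per modality; score-level fusion sums the
# per-class probability estimates before taking the argmax.
knn_ear = KNeighborsClassifier(n_neighbors=3).fit(ear_tr, y_tr)
knn_face = KNeighborsClassifier(n_neighbors=3).fit(face_tr, y_tr)
fused = knn_ear.predict_proba(ear_te) + knn_face.predict_proba(face_te)
pred = fused.argmax(axis=1)
accuracy = (pred == y_te).mean()
```

Feature-level fusion, by contrast, would concatenate the two descriptor vectors before training a single classifier; the sum rule shown here is one common choice among score-fusion strategies.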
