Abstract

For multimodal biometric person recognition, information fusion can be performed at several levels: the sensor, feature, match-score, rank, and decision levels. In this paper, a novel method is proposed for fusing information from two or more biometric sources at the feature level. A key aspect of the method is the use of an optimisation procedure to regulate the contribution of each individual biometric modality to the concatenated feature vector. The effectiveness of the method is demonstrated by integrating features of static face images and text-independent speech segments. Feature-level fusion experiments are carried out on 40 subjects from a virtual database of face images and speech clips, and the results show that the proposed method outperforms both recognition without feature fusion and recognition based on intuitive feature fusion.
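As a concrete illustration of the weighted-concatenation idea described above, the following Python sketch fuses pre-extracted, pre-normalised face and speech feature vectors with a single scalar weight and tunes that weight on held-out data. The abstract does not specify the paper's feature extractors, classifier, or optimisation procedure, so the nearest-neighbour matcher, the grid search, and all names below are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def fuse(face_feat, speech_feat, w):
    # Weighted concatenation: w scales the face features and (1 - w)
    # the speech features, regulating each modality's contribution
    # to the combined feature vector.
    return np.concatenate([w * face_feat, (1.0 - w) * speech_feat])

def recognition_accuracy(w, gallery, probes):
    # gallery/probes: lists of (face_vector, speech_vector, subject_id).
    # A nearest-neighbour matcher stands in for the paper's classifier.
    templates = np.array([fuse(f, s, w) for f, s, _ in gallery])
    ids = [i for _, _, i in gallery]
    correct = 0
    for f, s, true_id in probes:
        q = fuse(f, s, w)
        nearest = int(np.argmin(np.linalg.norm(templates - q, axis=1)))
        correct += int(ids[nearest] == true_id)
    return correct / len(probes)

def best_weight(gallery, probes):
    # A coarse grid search stands in for the (unspecified) optimisation
    # procedure that tunes the modality weight on validation data.
    candidates = np.linspace(0.0, 1.0, 21)
    scores = [recognition_accuracy(w, gallery, probes) for w in candidates]
    return candidates[int(np.argmax(scores))]
```

In this sketch the optimised weight plays the role the abstract assigns to the optimisation procedure: rather than concatenating the two modalities as-is (the intuitive fusion baseline), each modality's contribution is scaled to maximise recognition accuracy on held-out data.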
