Abstract

This paper presents a novel approach to face recognition based on the fusion of appearance and depth information at the match score level. We apply passive stereoscopy instead of the active range scanning popularly used by others, and show that present-day passive stereoscopy, though less robust and accurate, makes a positive contribution to face recognition. By combining the appearance and disparity scores in a linear fashion, we verify experimentally that the combined results are noticeably better than those of each individual modality. We also propose an original learning method, bilateral two-dimensional linear discriminant analysis (B2DLDA), to extract facial features from the appearance and disparity images. We compare B2DLDA with several existing 2DLDA methods on both the XM2VTS database and our own database; the results show that B2DLDA achieves better results than the others.
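The match-score-level fusion described above can be sketched as follows. This is a minimal illustration, assuming min-max normalization of each modality's scores before the linear combination; the paper's exact normalization scheme and the weight `alpha` are assumptions here, not the authors' reported settings.

```python
import numpy as np

def fuse_scores(appearance_scores, disparity_scores, alpha=0.5):
    """Linearly combine two modalities' match scores at the score level.

    Each modality is min-max normalized to [0, 1] first so the scores
    are comparable; `alpha` weights the appearance modality.
    """
    def minmax(scores):
        s = np.asarray(scores, dtype=float)
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

    a = minmax(appearance_scores)
    d = minmax(disparity_scores)
    return alpha * a + (1.0 - alpha) * d
```

With equal weights, a gallery entry that scores moderately in both modalities can outrank one that scores well in only one, which is the intuition behind combining appearance and disparity.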

Highlights

  • A great amount of research effort has been devoted to face recognition based on 2D face images [1]

  • The face recognition experiments are performed on the XM2VTS database and the Mega-D database, respectively, to verify the improvement of the recognition rate by combining 2D and 3D information

  • Input: A1, A2, . . . , An, ml, mr (Ai are the n training images; ml and mr are the numbers of discriminant components of the left and right bilateral 2DLDA (B2DLDA) transforms)
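The B2DLDA transform named above can be sketched roughly as follows. This is a hypothetical reconstruction based on the standard 2DLDA formulation, in which the "bilateral" pair consists of a right transform from column-wise scatter and a left transform from row-wise scatter; it is not the paper's exact algorithm, and the regularization term is an assumption for numerical stability.

```python
import numpy as np

def b2dlda(images, labels, ml, mr):
    """Sketch of bilateral 2DLDA (B2DLDA).

    images: array of shape (n, h, w); labels: length-n class labels.
    Returns (L, R): left (h, ml) and right (w, mr) discriminant
    transforms, so each image A projects to the ml-by-mr feature
    matrix L.T @ A @ R.
    """
    images = np.asarray(images, dtype=float)
    labels = np.asarray(labels)
    n, h, w = images.shape
    mean = images.mean(axis=0)

    # Between- and within-class scatter, computed both column-wise
    # (right, w-by-w) and row-wise (left, h-by-h).
    Sb_r = np.zeros((w, w)); Sw_r = np.zeros((w, w))
    Sb_l = np.zeros((h, h)); Sw_l = np.zeros((h, h))
    for c in np.unique(labels):
        Ac = images[labels == c]
        mc = Ac.mean(axis=0)
        d = mc - mean
        Sb_r += len(Ac) * d.T @ d
        Sb_l += len(Ac) * d @ d.T
        for A in Ac:
            e = A - mc
            Sw_r += e.T @ e
            Sw_l += e @ e.T

    def top_eigvecs(Sw, Sb, m):
        # Leading eigenvectors of Sw^{-1} Sb; Sw is slightly
        # regularized (assumed) so the solve is well-posed.
        reg = Sw + 1e-6 * np.eye(len(Sw))
        vals, vecs = np.linalg.eig(np.linalg.solve(reg, Sb))
        order = np.argsort(-vals.real)
        return vecs[:, order[:m]].real

    R = top_eigvecs(Sw_r, Sb_r, mr)
    L = top_eigvecs(Sw_l, Sb_l, ml)
    return L, R
```

Matching is then done between the reduced ml-by-mr feature matrices, which is far cheaper than comparing the full h-by-w images.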



Introduction

A great amount of research effort has been devoted to face recognition based on 2D face images [1]. One of the multimodal approaches is 2D plus 3D [3,4,5,6,7]. A 3D representation adds a dimension of useful information for describing the face: 3D information is relatively insensitive to changes in illumination, skin color, pose, and makeup, so it lacks the intrinsic weaknesses of 2D approaches. Conversely, 2D appearance precisely captures features localized in the hair, eyebrows, eyes, nose, mouth, facial hair, and skin color, where 3D capture is difficult and inaccurate.

