Abstract

In this paper we present a robust multimodal depth and intensity face verification approach based on score-level fusion of two multimodal systems (multi-algorithm: fusion of multiple descriptors; multi-sensor: fusion of depth and intensity). First, we correct head rotations along all three axes using Iterative Closest Point (ICP), and then apply our preprocessing algorithm. For feature extraction, we use four local methods: Local Binary Patterns (LBP), which encode a local neighborhood of each pixel; Statistical Local Features (SLF), which compute statistical parameters in a neighborhood of the pixel of interest; Binarized Statistical Image Features (BSIF), which compute a binary code for each pixel by linearly projecting local image patches onto a subspace; and Local Phase Quantization (LPQ), which quantizes the phase of the Fourier transform in local neighborhoods. For classification, we use the normalized correlation distance after dimensionality reduction by Principal Component Analysis (PCA) followed by the Enhanced Fisher linear discriminant Model (EFM). Score-level fusion is performed with a two-class Support Vector Machine (SVM) classifier. Experiments are performed on the CASIA 3D and Bosphorus face databases. The experimental results show that the proposed local descriptor (SLF) outperforms all the other descriptors studied, and that fusing SLF with BSIF + LBP gives the best results on both databases. In our system, all processing steps are performed automatically.
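The following is a minimal sketch, not the authors' code, of the matching pipeline outlined above, under illustrative assumptions: uniform LBP histograms from scikit-image stand in for the four descriptors, PCA followed by scikit-learn's LinearDiscriminantAnalysis approximates the PCA + EFM stage, matching uses the normalized correlation distance, and an SVM fuses depth and intensity scores. Synthetic random images replace the CASIA 3D / Bosphorus data, and all sizes and parameters (64x64 crops, 8 LBP neighbours, 8 PCA components) are assumptions for illustration only.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC


def lbp_histogram(img, P=8, R=1):
    """Uniform LBP codes pooled into a normalized histogram (one descriptor per image)."""
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist


def normalized_correlation(a, b):
    """Normalized correlation similarity between two projected feature vectors."""
    a, b = a - a.mean(), b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def modality_scores(images, labels):
    """Extract LBP descriptors, project with PCA then LDA (an EFM-like stage),
    and score every gallery/probe pair with normalized correlation."""
    feats = np.array([lbp_histogram(im) for im in images])
    proj = PCA(n_components=8).fit_transform(feats)
    proj = LinearDiscriminantAnalysis().fit(proj, labels).transform(proj)
    scores, same = [], []
    for i in range(len(proj)):
        for j in range(i + 1, len(proj)):
            scores.append(normalized_correlation(proj[i], proj[j]))
            same.append(int(labels[i] == labels[j]))
    return np.array(scores), np.array(same)


rng = np.random.default_rng(0)
n_subjects, n_per_subject = 10, 5
labels = np.repeat(np.arange(n_subjects), n_per_subject)

# Synthetic depth and intensity "face crops" (stand-ins for the real sensor data).
depth = rng.integers(0, 256, size=(n_subjects * n_per_subject, 64, 64), dtype=np.uint8)
intensity = rng.integers(0, 256, size=(n_subjects * n_per_subject, 64, 64), dtype=np.uint8)

s_depth, y = modality_scores(depth, labels)
s_int, _ = modality_scores(intensity, labels)

# Score-level fusion: a two-class SVM separates genuine from impostor pairs
# using the pair of per-modality similarity scores as its input.
fused = np.column_stack([s_depth, s_int])
svm = SVC(kernel="rbf").fit(fused, y)
print("training accuracy of the fused verifier:", svm.score(fused, y))
```

With real gallery/probe data the SVM would be trained on a development set of genuine and impostor score pairs and evaluated on held-out probes; the random data here only exercises the plumbing.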

