Abstract

Registration of multi-sensor data (particularly from visible color sensors and infrared sensors) is a prerequisite for multimodal image analysis such as image fusion. In this paper, we propose an automatic registration technique for visible and infrared face images based on silhouette matching and robust transformation estimation. The key idea is to represent a (visible or infrared) face image by its silhouette, which is extracted from the image’s edge map and consists of a set of discrete points, and then to align the two silhouette point sets by using their feature similarity and spatial geometrical information. More precisely, our algorithm first matches the silhouette point sets by their local shape features, such as shape context, which creates a set of putative correspondences that may be contaminated by outliers. Next, we estimate the accurate transformation from the putative correspondence set under a robust maximum likelihood framework combined with the EM algorithm, where the transformation between the image pair is modeled by a parametric model such as a rigid or affine transformation. Qualitative and quantitative comparisons on a publicly available database demonstrate that our method significantly outperforms other state-of-the-art visible/infrared face registration methods. As a result, our method will be beneficial for fusion-based face recognition.
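The robust estimation step described above (maximum likelihood with EM over outlier-contaminated correspondences) can be sketched as follows. This is an illustrative re-implementation, not the authors' code: it assumes a Gaussian inlier / uniform outlier mixture over affine-transform residuals, and all function names and parameter values (`gamma`, `sigma2`, `area`) are assumptions made for the sketch.

```python
# Hedged sketch: EM-based robust affine estimation from putative point
# correspondences (Gaussian inliers + uniform outliers). Illustrative
# only; parameter values and structure are assumptions, not the paper's.
import numpy as np

def fit_affine(X, Y, w):
    """Weighted least-squares affine transform mapping X -> Y."""
    A = np.hstack([X, np.ones((len(X), 1))])        # homogeneous coords
    W = np.sqrt(w)[:, None]
    # Solve min ||W (A P - Y)||^2 for the 3x2 parameter matrix P.
    P, *_ = np.linalg.lstsq(W * A, W * Y, rcond=None)
    return P

def em_affine(X, Y, n_iter=30, gamma=0.5, sigma2=1.0, area=100.0):
    """Alternate inlier responsibilities (E-step) and affine fit (M-step)."""
    w = np.full(len(X), 0.5)                        # initial inlier weights
    for _ in range(n_iter):
        P = fit_affine(X, Y, w)                     # M-step: refit transform
        R = np.hstack([X, np.ones((len(X), 1))]) @ P - Y
        d2 = np.sum(R**2, axis=1)                   # squared residuals
        # E-step: posterior probability that each pair is an inlier.
        g = gamma * np.exp(-d2 / (2 * sigma2)) / (2 * np.pi * sigma2)
        w = g / (g + (1 - gamma) / area)
        # Update noise variance and mixing weight from responsibilities.
        sigma2 = max(np.sum(w * d2) / (2 * np.sum(w)), 1e-6)
        gamma = np.mean(w)
    return P, w

# Toy demo: affine-transformed silhouette points plus gross outliers.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (60, 2))
P_true = np.array([[0.9, -0.2], [0.2, 0.9], [1.0, 2.0]])
Y = np.hstack([X, np.ones((60, 1))]) @ P_true
Y[:10] += rng.uniform(-5, 5, (10, 2))               # contaminate 10 pairs
P_est, w = em_affine(X, Y)
```

The transform recovered from `em_affine` stays close to `P_true` because the outlier pairs receive near-zero responsibility and drop out of the weighted fit; a plain least-squares fit over all 60 pairs would be biased by the contaminated correspondences.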
