Abstract

This paper presents a two‐stage procedure to combine multiple face traits for identity authentication. At the first stage, a high dimensional random projection is applied to the raw visual and infrared face images to extract useful information relevant to each identity. This is followed by a dimension reduction using eigenfeature regularization and extraction (ERE). At the second stage, the scores from two verification systems based on each face modality are fused by an error minimization algorithm. This error minimization algorithm directly optimizes the verification accuracy by adjusting the parameters of a polynomial classifier. Two data sets consisting of visual and infrared face images have been used for experimentation. Our empirical observation shows encouraging results regarding the effectiveness of the proposed method. Copyright © 2011 John Wiley & Sons, Ltd.
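The sketch below is only an illustration of the two-stage pipeline the abstract describes, not the authors' implementation: synthetic arrays stand in for the visual and infrared face images, plain PCA stands in for the ERE step, and a coarse grid search over a degree-2 polynomial fusion stands in for the paper's error minimization algorithm. All sizes, names, and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: each row is a flattened face image (visual / infrared), labels mark
# genuine (1) vs. impostor (0) verification attempts. Purely synthetic.
n_samples, n_pixels = 200, 4096
visual = rng.standard_normal((n_samples, n_pixels))
infrared = rng.standard_normal((n_samples, n_pixels))
labels = rng.integers(0, 2, n_samples)

def random_projection(X, out_dim=512, seed=1):
    """Stage one: project raw images with a Gaussian random matrix."""
    R = np.random.default_rng(seed).standard_normal((X.shape[1], out_dim)) / np.sqrt(out_dim)
    return X @ R

def pca_reduce(X, n_components=64):
    """Plain PCA stands in here for the paper's ERE dimension reduction."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Stage one, applied independently to each modality.
feat_vis = pca_reduce(random_projection(visual))
feat_ir = pca_reduce(random_projection(infrared))

def match_scores(feat):
    """Per-modality matching score: cosine similarity to a toy identity template."""
    template = feat.mean(axis=0)
    return feat @ template / (np.linalg.norm(feat, axis=1) * np.linalg.norm(template) + 1e-12)

score_vis = match_scores(feat_vis)
score_ir = match_scores(feat_ir)

def verification_error(fused, labels):
    """Half total error rate (FAR + FRR)/2 at the best decision threshold."""
    best = 1.0
    for t in np.unique(fused):
        accept = fused >= t
        far = np.mean(accept[labels == 0]) if np.any(labels == 0) else 0.0
        frr = np.mean(~accept[labels == 1]) if np.any(labels == 1) else 0.0
        best = min(best, 0.5 * (far + frr))
    return best

# Stage two: fuse the two scores with a small polynomial and pick its weights
# by directly minimizing the verification error (grid search, for illustration only).
best_err, best_w = 1.0, None
grid = np.linspace(-1.0, 1.0, 9)
for w1 in grid:
    for w2 in grid:
        for w3 in grid:  # degree-2 cross term between the two modality scores
            fused = w1 * score_vis + w2 * score_ir + w3 * score_vis * score_ir
            err = verification_error(fused, labels)
            if err < best_err:
                best_err, best_w = err, (w1, w2, w3)

print("fusion weights:", best_w, "toy verification error:", round(best_err, 3))
```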
