Abstract

Face recognition in practical settings often involves a change of acquisition sensor, i.e., gallery images captured in the near-infrared domain and probe images captured in the visible domain, or vice versa. Robust face recognition under such circumstances therefore requires a modality-invariant representation. This letter proposes a hash-encoding-based descriptor, linear cross-modal hash encoding (LCMHE), to bridge this modality gap for face recognition. LCMHE encodes the pixels of a face image using a pretrained hash dictionary. The encoding emphasizes the reflectance component of the face and enables accurate recognition. We further combine LCMHE with a deep-learning model to improve recognition. Experiments on the publicly available CASIA NIR-VIS 2.0 database verify the effectiveness of the proposed method.
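As a rough illustration only (the abstract gives no implementation details, so this is not the letter's exact LCMHE), a minimal sketch of a generic linear pixel hash encoding is shown below: it assumes a pretrained linear projection plays the role of the hash dictionary, binarizes each pixel's local patch into a code, and pools the codes into regional histograms. All function names and parameters are hypothetical.

```python
# Hypothetical sketch of a linear pixel hash encoding (not the paper's exact LCMHE):
# each pixel's local patch is projected by a pretrained linear matrix W,
# binarized by sign into an n_bits-bit code, and per-region code histograms
# are concatenated into a descriptor intended to be modality-invariant.

import numpy as np

def train_projection(patches, n_bits=8):
    """Learn a linear projection (here PCA directions) acting as the 'hash dictionary'."""
    patches = patches - patches.mean(axis=0)
    # top-n_bits principal directions of the training patches
    _, _, vt = np.linalg.svd(patches, full_matrices=False)
    return vt[:n_bits]                              # shape: (n_bits, patch_dim)

def encode_image(img, W, patch=3, grid=4):
    """Hash-encode every pixel and pool the codes into a grid of histograms."""
    n_bits = W.shape[0]
    h, w = img.shape
    r = patch // 2
    codes = np.zeros((h, w), dtype=np.int32)
    for i in range(r, h - r):
        for j in range(r, w - r):
            p = img[i - r:i + r + 1, j - r:j + r + 1].ravel().astype(np.float64)
            bits = (W @ (p - p.mean())) > 0         # n_bits binary hash bits
            codes[i, j] = int(np.dot(bits, 1 << np.arange(n_bits)))
    # concatenate per-cell code histograms into one descriptor
    desc = []
    for rows in np.array_split(codes, grid, axis=0):
        for block in np.array_split(rows, grid, axis=1):
            hist, _ = np.histogram(block, bins=2 ** n_bits, range=(0, 2 ** n_bits))
            desc.append(hist / max(hist.sum(), 1))
    return np.concatenate(desc)

# Usage idea: fit W on patches pooled from both NIR and VIS training faces, then
# compare encode_image(nir_face, W) and encode_image(vis_face, W) with cosine similarity.
```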
