Abstract

Over the last few years, several researchers have attempted to bridge the gap between the training set (high-resolution gallery) and the testing set (degraded, low-quality probes) for Face Recognition (FR) under surveillance conditions, using efficient low-level processing and statistical learning methods. In this paper, this challenging task of FR in degraded conditions is handled using a Bag-of-Words (BOW) based approach combined with Domain Adaptation (DA). An adaptive SIFT feature is extracted with spatially varying density around the fiducial points of the face; keypoints are sampled densely over the discriminative parts of the face and sparsely over pre-decided, less informative zones. The SIFT features are used to form the BOW dictionary and are then combined with Local Binary Patterns (LBP). An unsupervised DA method, which treats the training set as the source domain and the test set as the target domain, is used to boost FR performance with this BOW-based face representation. The novelty and contribution of this work on FR for surveillance applications is the estimation of the transformation from source to target, based on the eigenvectors of the source and target domains, using the BOW-based representation that combines the two discriminative facial features. Performance analysis of the proposed method using ROC and CMC measures, along with retrieval results on two real-world surveillance face datasets, shows its superiority over recent state-of-the-art techniques.
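The abstract does not spell out the exact adaptation algorithm, but the description of a source-to-target transformation estimated from the eigenvectors of both domains is consistent with a subspace-alignment-style procedure. The following is a minimal, illustrative sketch under that assumption; the function names, the subspace dimension `d`, and the use of BOW+LBP descriptors as input are hypothetical and not taken from the paper.

```python
# Hedged sketch: eigenvector-based unsupervised domain adaptation
# (subspace-alignment style), assumed to approximate the source-to-target
# transformation described in the abstract. Not the authors' implementation.
import numpy as np

def subspace(X, d):
    """Return the top-d PCA eigenvectors (as columns) of the centred data X (n x D)."""
    Xc = X - X.mean(axis=0)
    # Eigenvectors of the covariance matrix obtained via SVD of the centred data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T                          # D x d basis

def align_source_to_target(Xs, Xt, d=64):
    """Project source (training/gallery) and target (test/probe) descriptors
    into a common space by aligning the source eigen-subspace with the target's."""
    Ps = subspace(Xs, d)                     # source-domain eigenvectors
    Pt = subspace(Xt, d)                     # target-domain eigenvectors
    M = Ps.T @ Pt                            # d x d transformation from source to target basis
    Zs = Xs @ Ps @ M                         # adapted source features
    Zt = Xt @ Pt                             # target features in their own subspace
    return Zs, Zt

# Hypothetical usage, assuming gallery/probe images are already encoded
# as combined BOW+LBP feature vectors (one row per image):
# Zs, Zt = align_source_to_target(gallery_features, probe_features, d=64)
# Matching can then proceed with, e.g., cosine similarity between rows of Zs and Zt.
```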
