Abstract

In this paper, we propose a framework for face recognition in real-world noisy videos. The difficulty of the video face recognition task lies in the challenging appearance variations caused by motion blur, large head rotation, occlusion, illumination change, and significant image noise. We address these problems with a non-rigid face tracking approach that exploits 3D face shape priors, local appearance models of the major facial features, the face silhouette, and online feature matches across video frames. The benefits are twofold. First, the 3D tracking algorithm achieves accurate registration of faces in videos: whereas state-of-the-art approaches rely on discriminative appearance models to classify face images into different views, we directly estimate the face pose for a view-based face recognition algorithm. Second, since the 3D tracking algorithm has a probabilistic form and provides a confidence measure on the tracking result, this measure can be used to improve the robustness of face recognition. With the precisely localized faces, recognition is performed with different feature descriptors. Experiments on real-world noisy videos from YouTube demonstrate a significant improvement even with simple descriptors: the rank-1 recognition rate reaches 79.8%, while the best previously reported result on the same dataset is 71.24%.
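As a rough illustration of the second benefit, and not the paper's actual algorithm, the sketch below shows one way a tracker's per-frame confidence could down-weight unreliable frames (blur, occlusion) when aggregating matching scores over a face track. The descriptor representation, the gallery structure, the cosine-similarity matcher, and the linear weighting scheme are all assumptions for the example.

```python
import numpy as np

def recognize_face_track(frame_descriptors, track_confidences, gallery):
    """Confidence-weighted recognition over a tracked face sequence (hypothetical).

    frame_descriptors: (T, D) array, one face descriptor per video frame.
    track_confidences: (T,) array of tracking confidences in [0, 1].
    gallery: dict mapping identity -> (D,) template descriptor.
    Returns the gallery identity with the highest aggregated score.
    """
    # L2-normalize descriptors so dot products are cosine similarities.
    X = frame_descriptors / np.linalg.norm(frame_descriptors, axis=1, keepdims=True)
    # Normalize confidences into weights; low-confidence frames contribute less.
    w = track_confidences / (track_confidences.sum() + 1e-8)
    scores = {}
    for identity, template in gallery.items():
        t = template / np.linalg.norm(template)
        # Per-frame cosine similarity, then confidence-weighted average.
        scores[identity] = float(np.dot(X @ t, w))
    return max(scores, key=scores.get)
```

With this kind of aggregation, a few badly tracked frames cannot dominate the decision, which is the intuition behind using the tracker's probabilistic confidence to make recognition more robust.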
