Abstract

Attention-based techniques have been used successfully to rate image quality and are widely employed for set-based face recognition. Nevertheless, for video face recognition, where the base convolutional neural network (CNN) trained on large-scale data already provides discriminative features, fusing features with only predicted quality scores to generate a representation is likely to cause a duplicate-sample dominance problem and degrade performance accordingly. To resolve this problem, we propose a redundancy-removing aggregation network (RRAN) for video face recognition. Compared with other quality-aware aggregation schemes, RRAN exploits similarity information to suppress the noise introduced by redundant video frames. By leveraging metric learning, RRAN introduces a distance calibration scheme that aligns the negative-pair distance distributions of different video representations, improving accuracy under a uniform threshold. We evaluate RRAN through a series of experiments on multiple realistic datasets, including YouTube Faces, IJB-A, and IJB-C. These comprehensive experiments demonstrate that our method diminishes the influence of poor-quality frames that constitute a large proportion of a video and further improves overall recognition performance across individual differences. Specifically, RRAN achieves 96.84% accuracy on YouTube Faces, outperforming all existing aggregation schemes.
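To make the duplicate-sample dominance problem concrete, the sketch below contrasts plain quality-weighted aggregation with a redundancy-aware variant. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the softmax weighting, the similarity-based down-weighting rule, and the `calibrate_distance` helper are all hypothetical stand-ins for the mechanisms the abstract describes.

```python
import numpy as np

def quality_weighted_aggregate(features, quality):
    """Baseline quality-aware aggregation: softmax over predicted quality
    scores weights each frame feature. Near-duplicate frames each cast a
    full vote, so a large group of redundant frames dominates the result."""
    w = np.exp(quality - quality.max())
    w /= w.sum()
    return (w[:, None] * features).sum(axis=0)

def redundancy_aware_aggregate(features, quality):
    """Hypothetical redundancy-removing variant (an assumption, not the
    paper's exact formulation): divide each frame's quality weight by its
    total similarity to all frames, so a cluster of near-duplicates shares
    roughly one effective vote instead of casting many."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T                                      # (n, n) cosine similarity
    redundancy = np.clip(sim, 0.0, None).sum(axis=1)   # >= 1 for every frame
    w = np.exp(quality - quality.max()) / redundancy
    w /= w.sum()
    return (w[:, None] * features).sum(axis=0)

def calibrate_distance(d, neg_mean, neg_std):
    """Hypothetical distance calibration: standardize a pair distance by the
    negative-pair statistics of the videos involved, so one global threshold
    can be applied uniformly across video representations."""
    return (d - neg_mean) / neg_std

# Toy video: 9 near-duplicate low-quality frames plus 1 distinct sharp frame.
rng = np.random.default_rng(0)
dup = rng.normal(size=128)
features = np.vstack([dup + 0.01 * rng.normal(size=128) for _ in range(9)]
                     + [rng.normal(size=128)])
quality = np.array([0.4] * 9 + [0.9])
v_base = quality_weighted_aggregate(features, quality)   # dominated by duplicates
v_rra = redundancy_aware_aggregate(features, quality)    # duplicates down-weighted
```

In this toy setting the baseline representation stays close to the duplicated frame because nine copies outvote one sharp frame, while the redundancy-aware variant shifts weight toward the distinct high-quality frame, which is the qualitative effect the abstract attributes to RRAN.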
