Abstract

Although deep learning and large-scale face datasets have driven significant progress in face recognition, low-quality face recognition remains a challenging problem in real-world applications, especially in unconstrained surveillance scenes. We propose a texture-guided (TG) transfer learning approach under the knowledge distillation scheme to improve low-quality face recognition performance. Unlike existing methods, in which the distillation loss is built on forward-propagation signals such as output logits and intermediate features, this study uses the backward-propagation gradient texture. More specifically, the gradient texture of a low-quality image is forced to align with that of its high-quality counterpart, reducing the feature discrepancy between high- and low-quality images. Moreover, attention is introduced to derive a soft-attention (SA) version of the transfer learning, termed SA-TG, which focuses on informative regions. Experiments on the benchmark low-quality face datasets TinyFace and QMUL-SurvFace confirm the superiority of the proposed method; notably, a Rank-1 accuracy improvement of more than 6.3% is achieved on TinyFace.

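To make the gradient-texture alignment concrete, the following is a minimal PyTorch sketch, not the authors' implementation. It assumes the "gradient texture" is the gradient of the recognition loss with respect to the input image; the names `backbone`, `recog_loss`, and `lambda_tg`, and the mean-squared alignment penalty, are all illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def gradient_texture(backbone, recog_loss, images, labels, create_graph=False):
    """Gradient of the recognition loss w.r.t. the input pixels.

    Treating this input-gradient map as the "texture" is an assumption of
    this sketch; the paper defines its own gradient texture.
    """
    images = images.clone().requires_grad_(True)
    loss = recog_loss(backbone(images), labels)
    (grad,) = torch.autograd.grad(loss, images, create_graph=create_graph)
    return grad

def tg_loss(backbone, recog_loss, hq_images, lq_images, labels, lambda_tg=1.0):
    """Recognition loss on low-quality inputs plus a penalty pulling the
    low-quality gradient texture toward its high-quality counterpart."""
    # High-quality texture serves as a fixed target (the "teacher" side).
    g_hq = gradient_texture(backbone, recog_loss, hq_images, labels).detach()
    # Low-quality texture keeps its graph so the penalty can be optimized.
    g_lq = gradient_texture(backbone, recog_loss, lq_images, labels,
                            create_graph=True)
    align = F.mse_loss(g_lq, g_hq)
    base = recog_loss(backbone(lq_images), labels)
    return base + lambda_tg * align
```

In this reading, the SA-TG variant would amount to weighting the squared gradient differences with a spatial attention map before averaging, so the alignment concentrates on informative facial regions rather than the whole image.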