Abstract

Deep-learning-based cross-age face recognition has improved significantly in recent years. However, for discriminative methods it remains challenging to extract robust age-invariant features that suppress the interference caused by aging. In this paper, we propose a novel and effective attention-based feature decomposition model, the age-invariant feature extraction network, which learns more discriminative feature representations and reduces the disturbance caused by aging. Our method uses a feature decomposition module built on efficient channel attention blocks to extract age-independent identity features from facial representations. The end-to-end framework learns age-invariant features directly, which is more convenient and greatly reduces training complexity compared with existing multi-stage training methods. In addition, we propose a direct sum loss function to further suppress the interference of age-related features. Our method achieves stable performance, and experimental results demonstrate its superiority over the state of the art on four benchmark datasets: we obtain relative improvements of 0.06%, 0.2%, and 2.2% on the cross-age datasets CACD-VS, AgeDB, and CALFW, respectively, and a relative improvement of 0.03% on the general dataset LFW.
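To make the decomposition idea concrete, the sketch below illustrates one plausible reading of the abstract: an efficient-channel-attention (ECA) block produces per-channel weights that select an age-related component from a backbone feature map, the identity component is taken as the residual, and a "direct sum" penalty discourages overlap between the two components. All class names (`ECABlock`, `AttentionFeatureDecomposition`, `direct_sum_loss`) and the orthogonality form of the loss are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ECABlock(nn.Module):
    """Efficient channel attention: global average pooling followed by a
    1-D convolution over the channel descriptor, yielding per-channel
    weights in (0, 1)."""

    def __init__(self, channels, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=(k_size - 1) // 2, bias=False)

    def forward(self, x):                                   # x: (B, C, H, W)
        y = self.pool(x)                                    # (B, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(1, 2))        # (B, 1, C)
        return torch.sigmoid(y.transpose(1, 2).unsqueeze(-1))  # (B, C, 1, 1)


class AttentionFeatureDecomposition(nn.Module):
    """Splits a backbone feature map into an age-related component selected
    by the ECA weights and a residual age-invariant identity component."""

    def __init__(self, channels):
        super().__init__()
        self.eca = ECABlock(channels)

    def forward(self, x):
        attn = self.eca(x)
        x_age = attn * x          # age-related part highlighted by attention
        x_id = x - x_age          # age-invariant identity part (residual)
        return x_id, x_age


def direct_sum_loss(x_id, x_age):
    """Hypothetical direct-sum penalty: pushes the identity and age
    components toward orthogonality so they span complementary subspaces."""
    id_flat = F.normalize(x_id.flatten(1), dim=1)
    age_flat = F.normalize(x_age.flatten(1), dim=1)
    return (id_flat * age_flat).sum(dim=1).pow(2).mean()


if __name__ == "__main__":
    feats = torch.randn(4, 512, 7, 7)      # e.g. output of a CNN backbone
    afd = AttentionFeatureDecomposition(512)
    x_id, x_age = afd(feats)
    print(direct_sum_loss(x_id, x_age).item())
```

In an end-to-end setup of this kind, the identity branch would typically feed a standard recognition loss (e.g. a margin-based softmax) while the direct-sum term acts as a regularizer; the exact weighting between the two is a training detail the abstract does not specify.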
