Abstract

Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals, and it is interesting to observe how humans perceive facial age. Analyzing these properties can help in understanding the phenomenon of facial aging, and incorporating the findings can help in designing effective algorithms. Such a study has two components: facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individual from his/her facial image, whereas age-separated face recognition involves recognizing an individual from his/her age-separated images. In this research, we investigate which facial cues humans utilize to estimate the age of people belonging to various age groups, and we analyze the effect of an observer's gender, age, and ethnicity on age estimation skills. We also analyze how facial regions such as the binocular and mouth regions influence age estimation and recognition capabilities. Finally, we propose an age-invariant face recognition algorithm that incorporates the knowledge learned from these observations. Key observations of our research are: (1) the age group of newborns and toddlers is the easiest to estimate, (2) gender and ethnicity do not affect the judgment of age group estimation, (3) the face, as a global feature, is essential to achieve good performance in age-separated face recognition, and (4) the proposed algorithm yields improved recognition performance compared to existing algorithms and also outperforms a commercial system in the scenario where the younger image is used as the probe.

Highlights

  • Facial images convey a substantial amount of information such as the individual’s identity, ethnicity, gender, age, and emotional state [1]

  • For the IIIT-Delhi facial aging dataset, the proposed fusion rule, the human perception based fusion scheme (HPFS) (Row 17 in Table 8), outperforms most of the existing algorithms

  • Similar to the results on the IIIT-Delhi facial aging dataset, the results obtained on the FG-NET aging database suggest that the proposed HPFS (Row 17) outperforms traditional fusion schemes for both sets of experiments (a sketch of such score-level fusion follows this list)
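The highlights refer to score-level fusion via the human perception based fusion scheme (HPFS), which combines match scores from facial regions (e.g., the global face, binocular, and mouth regions) according to how informative each region is for age-separated recognition. The exact HPFS weighting rule is not reproduced in this summary, so the snippet below is only a minimal sketch of a generic weighted score-level fusion; the function names and weight values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative region weights; in HPFS these would be derived from the
# human-perception studies (hand-picked here purely for the example).
REGION_WEIGHTS = {
    "global_face": 0.6,   # face as a global feature carries the most weight
    "binocular":   0.25,
    "mouth":       0.15,
}

def min_max_normalize(scores):
    """Scale a 1-D array of match scores to [0, 1] before fusion."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def fuse_scores(region_scores, weights=REGION_WEIGHTS):
    """Weighted-sum fusion of per-region match scores.

    region_scores: dict mapping region name -> array of match scores
                   (one score per gallery identity) for a single probe.
    Returns the fused score array; the predicted identity is its argmax.
    """
    fused = None
    for region, scores in region_scores.items():
        contrib = weights[region] * min_max_normalize(scores)
        fused = contrib if fused is None else fused + contrib
    return fused

# Example: three gallery identities, scores from three facial regions.
probe_scores = {
    "global_face": [0.42, 0.71, 0.55],
    "binocular":   [0.38, 0.64, 0.61],
    "mouth":       [0.50, 0.58, 0.47],
}
fused = fuse_scores(probe_scores)
print("fused scores:", fused, "-> predicted identity:", int(np.argmax(fused)))
```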


Summary

Introduction

Facial images convey a substantial amount of information such as the individual’s identity, ethnicity, gender, age, and emotional state [1]. This knowledge plays a significant role during face-to-face communication between humans. Over the past few decades, many automatic face recognition algorithms have been developed. It is both crucial and challenging to develop an algorithm that is robust to variations such as pose, illumination, and expression. Another important challenge in face recognition is matching face images with age variations. For large-scale applications, invariance to aging is a very important requirement.
