Abstract
The face aging process is non-stationary, since humans mature in different ways. This property makes age estimation an attractive and challenging research topic in the computer vision community. Most previous work conventionally estimates age from the center area of the aligned face image. However, these methods ignore spatial context information and cannot attend to particular domain features due to the uncertainty in deep learning. In this work, we propose a novel Deep Multi-Input Multi-Stream Ordinal (D2MO) model for facial age estimation, which learns a deep fused feature through a specific spatial attention mechanism. Our approach is motivated by the observation that individuals undergo some universal changes during the aging process, such as hair turning white and wrinkles increasing. To focus on these spatial features, our D2MO uses four scales of receptive fields for global and contextual feature learning; meanwhile, four cropped face patches are utilized for local and detailed feature extraction. Benefiting from a multi-stream CNN architecture, differentiated feature maps are learned separately through each branch and then aggregated by a concatenation layer. We also introduce a novel representation of the age label as a multi-hot vector, from which the final predicted age can be calculated by summing the vector. This representation casts the age estimation task as a series of binary classification subproblems, which is easier to learn and more consistent with human cognition than regressing a single age value directly. Finally, we employ a joint training loss that supervises our model to learn ordinal ranking, label distribution and regression information simultaneously. Extensive experiments show that our D2MO model significantly outperforms other state-of-the-art age estimation methods on the MORPH II, FG-NET and UAGD datasets.
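The multi-hot age representation mentioned above can be illustrated with a minimal sketch. Here the vector length (`max_age`) and the decoding threshold are illustrative assumptions, not values taken from the paper; the entry at index k is 1 if and only if the true age exceeds k, so summing the (thresholded) entries recovers the age:

```python
# Illustrative sketch of an ordinal multi-hot age encoding.
# max_age and threshold are assumed values, not from the paper.

def age_to_multihot(age, max_age=100):
    """Encode an integer age as a multi-hot vector: entry k is 1 iff age > k."""
    return [1 if age > k else 0 for k in range(max_age)]

def multihot_to_age(probs, threshold=0.5):
    """Decode predicted per-entry probabilities by summing thresholded entries."""
    return sum(1 for p in probs if p > threshold)

vec = age_to_multihot(35)
print(sum(vec))                  # prints 35
print(multihot_to_age(vec))      # prints 35
```

Each vector entry corresponds to one binary subproblem ("is this face older than k?"), which is how the abstract's "series of binary classification subproblems" maps onto a single output vector.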