Abstract
Face attributes prediction has an increasing number of applications in human–computer interaction, face verification, and video surveillance. Various studies show that dependencies exist among face attributes. A multi-task learning architecture can build a synergy among correlated tasks through parameter sharing in the shared layers. However, most multi-task learning architectures ignore the dependencies between tasks in the task-specific layers. Thus, further boosting the performance of individual tasks by exploiting task dependencies among face attributes remains challenging. In this paper, we propose a multi-task learning architecture that uses task dependencies for face attributes prediction and evaluate its performance on the tasks of smile and gender prediction. The attention modules designed into the task-specific layers of our proposed architecture learn task-dependent disentangled representations. The experimental results demonstrate the effectiveness of our proposed network in comparison with the traditional multi-task learning architecture and state-of-the-art methods on the Faces of the World (FotW) and Labeled Faces in the Wild-a (LFWA) datasets.
Highlights
Face attributes are useful for obtaining a detailed description of human faces
Inspired by the attention mechanism, we propose a multi-task learning architecture that uses task dependencies for face attributes prediction
We present experimental results which demonstrate that our proposed architecture outperforms the traditional multi-task learning architecture and show its effectiveness in comparison with state-of-the-art methods on the Faces of the World (FotW) and Labeled Faces in the Wild-a (LFWA) datasets
Summary
Face attributes are useful for obtaining a detailed description of human faces (e.g., smile, gender, age, etc.). The performance of face attributes prediction has been improved by using deep convolutional neural networks (DCNNs) [5,6,7,8,9,10]. However, exploiting dependencies among face attributes in the task-specific layers of a multi-task learning architecture is a challenging problem. We propose a multi-task learning architecture that uses task dependencies for face attributes prediction and evaluate its performance on the tasks of smile and gender prediction. The transformed fully connected layers, which contain task-dependent disentangled representations, are fed into softmax layers to predict the final face attributes. We demonstrate the effectiveness of our proposed network by comparing it with the traditional multi-task learning architecture and state-of-the-art methods on the FotW and LFWA datasets.
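The overall design described above (shared layers, task-specific branches with attention modules, and per-task softmax outputs) can be sketched as a small PyTorch model. This is a minimal illustrative sketch, not the paper's exact configuration: the layer sizes, the squeeze-and-excitation-style attention design, and the class names (`TaskAttention`, `MultiTaskAttentionNet`) are all assumptions made for the example.

```python
# Hedged sketch of a multi-task network with task-specific attention
# modules for smile and gender prediction. Layer sizes and the
# attention design are illustrative assumptions.
import torch
import torch.nn as nn

class TaskAttention(nn.Module):
    """Channel-attention module (squeeze-and-excitation style; assumed design)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                          # x: (N, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))            # global average pool -> (N, C)
        return x * w.unsqueeze(-1).unsqueeze(-1)   # reweight channels per task

class MultiTaskAttentionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared layers: parameters are shared across both tasks.
        self.shared = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(4),
        )
        # Task-specific branches, each with its own attention module
        # to learn task-dependent disentangled representations.
        def branch():
            return nn.Sequential(TaskAttention(64), nn.Flatten(),
                                 nn.Linear(64 * 4 * 4, 2))
        self.smile_head = branch()
        self.gender_head = branch()

    def forward(self, x):
        f = self.shared(x)
        # Softmax over two classes per task (smile / gender).
        return (self.smile_head(f).softmax(dim=1),
                self.gender_head(f).softmax(dim=1))

net = MultiTaskAttentionNet()
smile, gender = net(torch.randn(2, 3, 64, 64))
print(smile.shape, gender.shape)  # two (2, 2) probability tensors
```

In this sketch the shared trunk provides common features, while each branch's attention module reweights those features toward its own task before classification, which is one simple way to realize task-dependent representations in the task-specific layers.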