Abstract

Face-pose estimation aims to estimate the gazing direction from two-dimensional face images. It provides important communicative information and cues for visual saliency. However, the task is challenging because of variations in lighting, background, face orientation, and appearance visibility. A descriptive representation of face images, and a mapping from that representation to poses, are therefore critical. In this paper, we use multimodal data and propose a novel face-pose estimation framework named multitask manifold deep learning ($\text{M}^2\text{DL}$). It is based on feature extraction with improved convolutional neural networks (CNNs) and on multimodal mapping-relationship learning with multitask learning. In the proposed CNNs, manifold-regularized convolutional layers learn the relationship between the outputs of neurons in a low-rank space. In addition, in the proposed mapping-relationship learning method, different modalities of face representations are naturally combined by applying multitask learning with incoherent sparse and low-rank constraints under a least-squares loss. Experimental results on three challenging benchmark datasets demonstrate the performance of $\text{M}^2\text{DL}$.
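The abstract does not give the exact form of the manifold regularization used in the convolutional layers, but a common way to realize such a term is a graph-Laplacian penalty $\operatorname{tr}(F^\top L F)$ on layer activations, which is small when samples that are neighbors in feature space produce similar outputs. The sketch below is an illustrative assumption (the k-NN graph construction, Gaussian weights, and the `manifold_regularizer` helper are hypothetical, not taken from the paper):

```python
import numpy as np

def manifold_regularizer(features, k=5):
    """Illustrative graph-Laplacian manifold penalty on layer outputs.

    features: (n, d) activations for n samples. Builds a symmetric
    k-nearest-neighbor similarity graph with Gaussian weights and
    returns tr(F^T L F), where L is the unnormalized graph Laplacian.
    The value is small when nearby samples have similar activations.
    """
    n = features.shape[0]
    # Pairwise squared Euclidean distances between samples.
    sq = np.sum(features ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * features @ features.T
    np.fill_diagonal(d2, np.inf)          # exclude self-neighbors

    # Bandwidth from the median distance (guard against zero spread).
    sigma = np.median(d2[np.isfinite(d2)])
    if sigma <= 0:
        sigma = 1.0

    # Gaussian-weighted k-NN affinity matrix, then symmetrize.
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[:k]
        W[i, nbrs] = np.exp(-d2[i, nbrs] / sigma)
    W = 0.5 * (W + W.T)

    L = np.diag(W.sum(axis=1)) - W        # unnormalized Laplacian
    return float(np.trace(features.T @ L @ features))
```

In training, a term like this would be weighted and added to the task loss so that the convolutional layers are encouraged to preserve the local manifold structure of the inputs; since the Laplacian is positive semidefinite, the penalty is always nonnegative and vanishes for activations that are constant across the graph.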
