DOI: https://doi.org/10.1109/icip.2017.8296255 | Publication Date: Sep 1, 2017 | Citations: 24
We propose a real-time and robust approach to estimating the full 3D head pose, even under extreme head poses, using a monocular system. To this end, we first model the head with a simple geometric shape initialized from facial landmarks (e.g., eye corners) extracted from the face. Next, 2D salient points are detected within the region defined by the projection of the visible surface of the geometric head model onto the image, and are projected back onto the head model to generate the corresponding 3D features. Optical flow is used to find the respective 2D correspondences in the next video frame. Assuming the monocular system is calibrated, it is then possible to solve the Perspective-n-Point (PnP) problem of estimating the head pose from the set of 3D features on the geometric model surface and their 2D correspondences obtained via optical flow in the next frame. The experimental evaluation shows that the proposed approach matches, and in some cases improves on, the state-of-the-art performance, with the major advantage of not requiring facial landmarks (except for initialization). As a result, our method also applies to real scenarios in which facial-landmark-based methods fail due to self-occlusions.
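To make the PnP step concrete, the sketch below (an illustration, not the authors' code) shows the forward pinhole projection that PnP inverts: given a 3D feature on the head-model surface, a pose (R, t), and camera intrinsics (fx, fy, cx, cy), the feature projects to a pixel (u, v). PnP recovers (R, t) from a set of such 3D-to-2D correspondences. All numeric values here are made up for illustration.

```python
def project(X, R, t, fx, fy, cx, cy):
    """Project a 3D model point X into pixel coordinates.

    X is first moved into the camera frame via the rigid-body pose
    (Xc = R @ X + t), then projected through a pinhole camera with
    intrinsics fx, fy (focal lengths) and cx, cy (principal point).
    """
    # Rigid-body transform: model/world frame -> camera frame
    xc = sum(R[0][j] * X[j] for j in range(3)) + t[0]
    yc = sum(R[1][j] * X[j] for j in range(3)) + t[1]
    zc = sum(R[2][j] * X[j] for j in range(3)) + t[2]
    # Pinhole projection (perspective divide by depth zc)
    return (fx * xc / zc + cx, fy * yc / zc + cy)

# Illustrative pose: identity rotation, head 5 units in front of the camera
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 5.0]
u, v = project([0.1, 0.0, 0.0], R, t, 500, 500, 320, 240)
print(u, v)  # -> 330.0 240.0
```

In practice, once the 3D features and their optical-flow-tracked 2D locations in the next frame are available, the pose can be solved with an off-the-shelf routine such as OpenCV's `cv2.solvePnP(object_points, image_points, K, dist_coeffs)`, which returns the rotation and translation minimizing the reprojection error of exactly this projection model.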