Abstract

This paper proposes a new method for real-time face pose estimation that handles ±90° yaw rotations and low-light conditions. The algorithm is based on fully automatic, run-time incremental 3D face modelling. The model is initially built from a set of 3D points derived from stereo grey-scale images. As new areas of the subject's face become visible to the cameras, new 3D points are automatically added to complete the model. In this way, we can estimate the pose over a wide range of rotation angles, where the frontal 3D points are typically occluded. We propose a new feature re-registering technique that combines the views of both cameras of the stereo rig to achieve fast and robust tracking over the full range of yaw rotations. The Levenberg–Marquardt algorithm is used to recover the pose, and a RANSAC framework rejects incorrectly tracked points. The model is continuously optimised in a bundle adjustment process that reduces the accumulated error of the 3D reconstruction. The intended application of this work is estimating the focus of attention of drivers in a simulator, which imposes challenging requirements. We validate our method on sequences recorded in a naturalistic truck simulator, on driving exercises designed by a team of psychologists.
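As an illustrative sketch only (not the authors' implementation), the RANSAC idea described above can be shown on a simplified 2D rigid-alignment analogue: minimal samples propose a pose, points whose reprojection error exceeds a threshold are rejected as mistracked, and the pose is refit on the surviving inliers. All function names, thresholds, and the 2D setting here are assumptions for illustration.

```python
import numpy as np

def estimate_rigid_2d(src, dst):
    """Least-squares rotation + translation mapping src -> dst (Procrustes)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def ransac_pose(src, dst, iters=200, thresh=0.05, seed=0):
    """RANSAC loop: fit on minimal 2-point samples, keep the model
    with the most inliers, then refit on all inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(len(src), size=2, replace=False)
        R, t = estimate_rigid_2d(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = err < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    R, t = estimate_rigid_2d(src[best_inliers], dst[best_inliers])
    return R, t, best_inliers

# Synthetic demo: 10 points, 30-degree rotation, 2 mistracked outliers.
rng = np.random.default_rng(1)
src = rng.uniform(-1, 1, size=(10, 2))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = src @ R_true.T + np.array([0.3, -0.1])
dst[:2] += 1.0                      # corrupt two correspondences
R, t, inliers = ransac_pose(src, dst)
```

In the paper's setting the same loop runs over 3D model points and their 2D stereo projections, with the pose refit performed by Levenberg–Marquardt rather than a closed-form Procrustes solve.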
