Abstract

This paper presents a new method for 3D face pose tracking under arbitrary illumination changes, using color images and depth data acquired by RGB-D cameras (e.g., Microsoft Kinect, Asus Xtion Pro Live). The method is based on the optimization of an objective function combining photometric and geometric energy terms. The geometric energy is computed from depth data, while the photometric energy is computed at each frame by comparing the current face texture to its corresponding region in the reference face texture defined in the first frame. To handle the effect of changing lighting conditions, we use a facial illumination model to determine which lighting variation must be applied to the current face texture to make it as close as possible to the reference texture. We demonstrate the accuracy and robustness of our method under normal lighting conditions through a set of experiments on the Biwi Kinect head pose database. Moreover, robustness to illumination changes is evaluated on a set of sequences of different persons recorded under severe lighting changes. These experiments show that our method is robust and precise under both normal and severe lighting conditions.
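As a rough illustration of the formulation summarized above, the combined objective could take the following form. This is only a minimal sketch consistent with the abstract; the pose parameters $\mathbf{p}$, the weights $\lambda_g, \lambda_p$, the illumination parameters $\boldsymbol{\alpha}$, and the individual term definitions are assumed notation, not the paper's own.

\[
E(\mathbf{p}, \boldsymbol{\alpha}) \;=\; \lambda_g \, E_{\mathrm{geo}}(\mathbf{p}) \;+\; \lambda_p \, E_{\mathrm{photo}}(\mathbf{p}, \boldsymbol{\alpha}),
\]
with, for instance,
\[
E_{\mathrm{geo}}(\mathbf{p}) = \sum_{i} \big\| T(\mathbf{p})\, v_i - d_i \big\|^2,
\qquad
E_{\mathrm{photo}}(\mathbf{p}, \boldsymbol{\alpha}) = \sum_{u} \Big( L_{\boldsymbol{\alpha}}\!\big[ I_t\big(W(u;\mathbf{p})\big) \big] - I_{\mathrm{ref}}(u) \Big)^{2},
\]
where $T(\mathbf{p})$ is the rigid transform defined by the current pose, $v_i$ are face model points and $d_i$ their associated depth observations, $W(u;\mathbf{p})$ warps texture coordinate $u$ into the current color frame, $I_{\mathrm{ref}}$ is the reference face texture built in the first frame, and $L_{\boldsymbol{\alpha}}$ denotes the facial illumination correction whose parameters $\boldsymbol{\alpha}$ are estimated jointly with the pose so that the corrected current texture matches the reference as closely as possible.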
