Abstract
Removing explicit user calibration is an appealing goal for gaze-tracking systems. In this paper, a novel auto-calibration method is proposed to predict the 3D point of regard (PoR) for a head-mounted gaze tracker. Our method uses an RGBD sensor as the scene camera to capture the 3D structure of the environment and treats salient regions as candidate 3D calibration targets. To improve efficiency, the bag-of-words (BoW) algorithm is applied to compute the similarity between scene images and eliminate redundant maps. After this elimination, the translation between the eye cameras and the scene camera is determined by associating the calibration targets with gaze vectors, and 3D gaze points are obtained from the transformed gaze vectors and the point cloud of the environment. Experimental results indicate that our method achieves effective 3D gaze estimation for head-mounted gaze trackers, which can promote engineering applications of human-computer interaction technology in many areas.
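The redundant-map elimination step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the toy 2D descriptors, the two-word vocabulary, and the cosine-similarity threshold are all illustrative assumptions; a real system would use local image descriptors (e.g. ORB or SIFT) and a k-means-trained visual vocabulary.

```python
# Hedged sketch of BoW-based frame similarity for dropping redundant scene
# images. Vocabulary, descriptors, and threshold are illustrative only.
from math import sqrt

def nearest_word(desc, vocab):
    """Index of the vocabulary centroid closest to one descriptor."""
    return min(range(len(vocab)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(desc, vocab[i])))

def bow_histogram(descriptors, vocab):
    """Normalized visual-word frequency histogram for one frame."""
    hist = [0.0] * len(vocab)
    for d in descriptors:
        hist[nearest_word(d, vocab)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def cosine(h1, h2):
    """Cosine similarity between two histograms (0 when either is empty)."""
    dot = sum(a * b for a, b in zip(h1, h2))
    n1 = sqrt(sum(a * a for a in h1))
    n2 = sqrt(sum(b * b for b in h2))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def keep_distinct_frames(frames, vocab, threshold=0.9):
    """Keep a frame only if it is dissimilar to every frame kept so far."""
    kept = []
    for descriptors in frames:
        h = bow_histogram(descriptors, vocab)
        if all(cosine(h, k) < threshold for k in kept):
            kept.append(h)
    return kept

# Illustrative usage: frame B duplicates frame A and is discarded.
vocab = [[0.0, 0.0], [1.0, 1.0]]
frames = [
    [[0.1, 0.0], [0.0, 0.1]],   # frame A
    [[0.05, 0.0], [0.0, 0.05]], # frame B, redundant with A
    [[1.0, 0.9], [0.9, 1.0]],   # frame C, distinct
]
kept = keep_distinct_frames(frames, vocab)
```

Only the retained frames' calibration targets would then be paired with gaze vectors in the subsequent calibration step.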