Abstract

To predict 3-D gaze points precisely, mobile gaze tracking systems require a per-subject calibration before first use. However, traditional calibration methods typically require the user to stare at predefined targets in the scene, which is tedious and time-consuming. In this study, we propose a novel method that removes explicit user calibration and achieves robust 3-D gaze estimation over a room-scale area. Our framework treats salient regions in the scene as candidate 3-D locations of gaze points. To improve the efficiency of predicting 3-D gaze from visual saliency, a bag-of-words algorithm is adopted to eliminate redundant scene images based on their similarity. After this elimination, saliency maps are generated from the remaining scene images, and the geometric relationship between the scene and eye cameras is obtained by aggregating 3-D salient targets with eye visual directions. Finally, we compute the 3-D point of regard (PoR) using the 3-D structure of the scene. Experimental results indicate that our method enhances the reliability of saliency maps and achieves promising 3-D gaze estimation performance across different subjects.
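As a rough illustration of the redundant-frame elimination step, the following is a minimal sketch of bag-of-visual-words filtering, not the authors' implementation: it assumes ORB descriptors (via OpenCV) and a k-means vocabulary, and the `n_words`, `sim_threshold`, and function names are hypothetical choices for this example.

```python
# Hypothetical sketch: bag-of-visual-words redundancy filtering for scene frames.
# Frames whose visual-word histogram is too similar to an already-kept frame are dropped.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def bow_histogram(descriptors, kmeans):
    """Quantize local descriptors into a normalized visual-word histogram."""
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(np.float32)
    return hist / (hist.sum() + 1e-8)

def filter_redundant_frames(frames, n_words=64, sim_threshold=0.9):
    """Return indices of frames kept after similarity-based elimination."""
    orb = cv2.ORB_create()
    per_frame_desc, pooled = [], []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, desc = orb.detectAndCompute(gray, None)
        per_frame_desc.append(desc)
        if desc is not None:
            pooled.append(desc.astype(np.float32))
    # Build the visual vocabulary from all descriptors pooled together.
    kmeans = KMeans(n_clusters=n_words, n_init=10).fit(np.vstack(pooled))
    kept, kept_hists = [], []
    for i, desc in enumerate(per_frame_desc):
        if desc is None:
            continue
        h = bow_histogram(desc.astype(np.float32), kmeans)
        # Keep the frame only if its cosine similarity to every kept frame is low.
        if all(np.dot(h, k) / (np.linalg.norm(h) * np.linalg.norm(k) + 1e-8)
               < sim_threshold for k in kept_hists):
            kept.append(i)
            kept_hists.append(h)
    return kept
```

In this sketch, the similarity threshold trades off data reduction against coverage of the scene; the paper's actual descriptor type, vocabulary size, and similarity measure are not specified in the abstract.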
