Abstract
We propose a 3D gaze-tracking method that combines accurate 3D eye- and facial-gaze vectors estimated from a Kinect v2 high-definition face model. Using accurate 3D facial and ocular feature positions, the method calculates gaze positions more accurately than previous approaches. Considering the image resolution of the face and eye regions, the two gaze vectors are combined as a weighted sum, with more weight allocated to the facial-gaze vector. Hence, the facial orientation mainly determines the gaze position, and the eye-gaze vector then applies minor adjustments. The 3D facial-gaze vector is first defined, and the 3D rotational center of the eyeball is then estimated; together, these define the 3D eye-gaze vector. Finally, the intersection point between the 3D gaze vector and the physical display plane is calculated as the gaze position. Experimental results show that the average gaze-estimation root-mean-square error was approximately 23 pixels from the desired position at a resolution of $$1920\times 1080$$.
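The abstract outlines two computational steps: combining the facial- and eye-gaze vectors as a weighted sum, and intersecting the resulting gaze ray with the physical display plane. The following Python sketch illustrates these two steps under assumed conventions; the weight value, function and parameter names, and coordinate frame are illustrative and are not taken from the paper.

```python
import numpy as np

def estimate_gaze_point(eye_center, face_gaze, eye_gaze, w_face=0.7,
                        plane_point=np.zeros(3),
                        plane_normal=np.array([0.0, 0.0, 1.0])):
    """Combine facial- and eye-gaze vectors, then intersect the gaze ray
    with the display plane. All names and the weight 0.7 are illustrative
    assumptions, not values from the paper.

    eye_center   : 3D rotational center of the eyeball (origin of the gaze ray)
    face_gaze    : unit 3D facial-gaze vector
    eye_gaze     : unit 3D eye-gaze vector
    w_face       : weight of the facial-gaze vector (eye-gaze weight is 1 - w_face)
    plane_point  : any point on the physical display plane
    plane_normal : unit normal of the display plane
    """
    # Weighted sum of the two gaze vectors, with more weight on the facial-gaze vector.
    g = w_face * np.asarray(face_gaze) + (1.0 - w_face) * np.asarray(eye_gaze)
    g /= np.linalg.norm(g)

    # Ray-plane intersection: find t such that eye_center + t * g lies on the plane.
    denom = np.dot(plane_normal, g)
    if abs(denom) < 1e-9:
        return None  # gaze ray is parallel to the display plane
    t = np.dot(plane_normal, plane_point - np.asarray(eye_center)) / denom
    return np.asarray(eye_center) + t * g

if __name__ == "__main__":
    # Example: eyeball center 60 cm in front of a display plane at z = 0.
    eye_dir = np.array([0.1, 0.0, -0.99])
    point = estimate_gaze_point(
        eye_center=np.array([0.0, 0.0, 0.6]),
        face_gaze=np.array([0.0, 0.0, -1.0]),
        eye_gaze=eye_dir / np.linalg.norm(eye_dir),
    )
    print(point)  # 3D intersection point on the display plane
```

Mapping the returned 3D intersection point to pixel coordinates would additionally require the display's physical size and resolution, which the abstract does not specify.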