Abstract

This paper presents a detailed analysis of modern techniques for tracking gaze with a webcam, together with a practical implementation of the most popular gaze-tracking methods. Various deep neural network models that can be used for online gaze monitoring are reviewed. We introduce a new eye-tracking approach that significantly increases the effectiveness of the deep learning method. The implementation is in Python, and its application is demonstrated by controlling interaction with a computer. Specifically, a dual coordinate system is used to control the computer by gaze. The first set of coordinates, the position of the face relative to the computer, is obtained by detecting the color of an infrared LED with the OpenCV library. The second set of coordinates, the gaze position, is obtained with the YOLO (v3) package. A method of labeling the eyes is given, in which three object classes are used to track gaze: looking to the left, to the right, and at the center.
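As a rough illustration of the pipeline summarized above, the sketch below combines OpenCV color thresholding (for the LED-based face coordinates) with a YOLOv3 model loaded through OpenCV's DNN module (for the three gaze classes). The model files yolov3-eyes.cfg/.weights, the HSV thresholds, and the helper names are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch (not the paper's exact code) of the dual-coordinate idea:
# 1) face position from the bright IR LED blob via OpenCV color thresholding,
# 2) gaze direction (left / right / center) from a YOLOv3 model with 3 classes.
import cv2
import numpy as np

GAZE_CLASSES = ["left", "right", "center"]  # the three eye labels described in the abstract

# Hypothetical custom-trained YOLOv3 config/weights for the three gaze classes.
net = cv2.dnn.readNetFromDarknet("yolov3-eyes.cfg", "yolov3-eyes.weights")
out_names = net.getUnconnectedOutLayersNames()

def led_position(frame_bgr):
    """First coordinate set: centroid of the brightest (IR LED) blob."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Threshold on very high brightness; the bounds must be tuned for the actual LED.
    mask = cv2.inRange(hsv, (0, 0, 240), (180, 60, 255))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

def gaze_direction(frame_bgr, conf_thresh=0.5):
    """Second coordinate set: best-scoring gaze class from the YOLOv3 detector."""
    blob = cv2.dnn.blobFromImage(frame_bgr, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    best_label, best_score = None, conf_thresh
    for output in net.forward(out_names):
        for det in output:                 # det = [cx, cy, w, h, objectness, class scores...]
            scores = det[5:]
            cls = int(np.argmax(scores))
            score = float(scores[cls] * det[4])
            if score > best_score:
                best_label, best_score = GAZE_CLASSES[cls], score
    return best_label

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print("face (LED) position:", led_position(frame))
    print("gaze direction:", gaze_direction(frame))
cap.release()
```

In this sketch the LED centroid supplies head-relative screen coordinates, while the detector's best-scoring class supplies the coarse gaze direction; a real controller would map both onto cursor motion or UI commands.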
