Abstract

Most affordable eye-tracking systems rely either on intrusive setups such as head-mounted cameras or on fixed cameras with infrared corneal reflections produced by illuminators. In assistive applications, intrusive eye-tracking systems become a burden when worn for extended periods, and infrared-based solutions generally fail in many environments, particularly outdoors or indoors where sunlight reaches the space. We therefore propose an eye-tracking solution based on state-of-the-art convolutional neural network face alignment algorithms that is both accurate and lightweight for assistive tasks such as selecting an object to be grasped by an assistive robotic arm. The solution uses a simple webcam to estimate gaze as well as face position and pose. We achieve a much faster computation time than the current state of the art while maintaining comparable accuracy. This paves the way for accurate appearance-based gaze estimation even on mobile devices, with an average error of around 4.5° on the MPIIGaze dataset [1] and state-of-the-art average errors of 3.9° and 3.3° on the UTMultiview [2] and GazeCapture [3], [4] datasets respectively, while reducing computation time by up to 91%.
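The reported errors are mean angular errors in degrees between predicted and ground-truth gaze directions, the standard metric on these benchmarks. As a rough illustration only (not the authors' code), the following sketch shows how such an angular error is typically computed from pitch/yaw gaze predictions; the pitch/yaw-to-vector convention is one commonly used in the gaze estimation literature and is assumed here.

```python
import numpy as np

def pitchyaw_to_vector(pitch, yaw):
    """Convert (pitch, yaw) gaze angles in radians to a 3D unit gaze vector.

    Assumed convention (common in gaze estimation work): y points down,
    z points away from the camera; pitch rotates up/down, yaw left/right.
    """
    return np.array([
        -np.cos(pitch) * np.sin(yaw),   # x: horizontal component
        -np.sin(pitch),                 # y: vertical component
        -np.cos(pitch) * np.cos(yaw),   # z: depth component
    ])

def angular_error_deg(pred, true):
    """Angle in degrees between predicted and ground-truth gaze vectors."""
    a = pitchyaw_to_vector(*pred)
    b = pitchyaw_to_vector(*true)
    cos = np.clip(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

# Hypothetical example: a prediction off by 0.08 rad in yaw
print(angular_error_deg((0.05, 0.10), (0.05, 0.02)))  # ~4.6 degrees
```

In this illustration, an error of roughly 4.5° corresponds to the predicted gaze ray deviating from the true one by about 0.08 radians, which gives an intuitive sense of the precision involved in selecting nearby objects.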
