Abstract

Gaze is an important non-verbal cue for inferring human attention and has been widely employed in many human–computer interaction applications. In this paper, we propose an improved Itracker to predict the subject's gaze from a single image frame, and employ a many-to-one bidirectional Long Short-Term Memory (bi-LSTM) to model the temporal information between frames for gaze estimation on video sequences. For single-frame gaze estimation, we improve the conventional Itracker by removing the face-grid and eliminating one network branch via concatenating the two eye-region images. Experimental results show that our improved Itracker achieves a significant 11.6% improvement over state-of-the-art methods on the MPIIGaze dataset and maintains robust estimation accuracy across different image resolutions, while greatly reducing network complexity. For video-sequence gaze estimation, by employing the bi-LSTM to model the temporal information between frames, experimental results on the EyeDiap dataset demonstrate a further 3% accuracy improvement.
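The abstract describes the two architectural ideas only at a high level. As a rough illustration of how they might be realized, the minimal PyTorch sketch below stacks the two eye crops channel-wise so a single shared branch replaces the original Itracker's two eye branches, drops the face-grid input, and feeds per-frame features into a many-to-one bi-LSTM that emits a gaze estimate from the final time step. All class names, layer sizes, input resolutions, and the 2-D gaze output here are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn


class ImprovedItrackerSketch(nn.Module):
    """Single-frame model sketch: the two eye crops are concatenated
    along the channel axis (3 + 3 = 6 channels) and processed by one
    shared eye branch; the face-grid input of the original Itracker
    is removed. All layer sizes are illustrative assumptions."""

    def __init__(self, feat_dim=128):
        super().__init__()
        self.eye_branch = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 4 * 4, feat_dim), nn.ReLU(),
        )
        self.face_branch = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 4 * 4, feat_dim), nn.ReLU(),
        )
        # 2-D gaze output (e.g. pitch/yaw) is an assumption.
        self.head = nn.Linear(2 * feat_dim, 2)

    def features(self, eyes, face):
        # Fuse the single eye-branch features with the face features.
        return torch.cat([self.eye_branch(eyes),
                          self.face_branch(face)], dim=1)

    def forward(self, eyes, face):
        return self.head(self.features(eyes, face))


class TemporalGazeSketch(nn.Module):
    """Video model sketch: per-frame CNN features are passed through a
    bidirectional LSTM; only the last time step's output is used for
    the prediction (many-to-one)."""

    def __init__(self, frame_model, feat_dim=128, hidden=64):
        super().__init__()
        self.frame_model = frame_model
        self.lstm = nn.LSTM(2 * feat_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)

    def forward(self, eyes_seq, face_seq):
        # eyes_seq: (B, T, 6, H, W); face_seq: (B, T, 3, H, W)
        B, T = eyes_seq.shape[:2]
        feats = self.frame_model.features(
            eyes_seq.flatten(0, 1), face_seq.flatten(0, 1)
        ).view(B, T, -1)
        out, _ = self.lstm(feats)      # (B, T, 2 * hidden)
        return self.head(out[:, -1])   # many-to-one: last step only


# Shape check with illustrative input sizes:
model = TemporalGazeSketch(ImprovedItrackerSketch())
eyes = torch.randn(2, 8, 6, 64, 96)     # 2 clips, 8 frames each
face = torch.randn(2, 8, 3, 128, 128)
print(model(eyes, face).shape)          # torch.Size([2, 2])
```

Collapsing the two eye branches into one shared branch over concatenated crops is what removes an entire branch's parameters, which is consistent with the reduced network complexity claimed in the abstract.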
