Abstract

An extensible multiple-model Kalman filter framework for eye tracking and video-oculography (VOG) applications is proposed. The Kalman filter predicts future states of a system on the basis of a mathematical model and previous measurements. The predicted values are then compared against the current measurements, and in a correction step the predicted state is refined using those measurements. In this work, the Kalman filter is used for smoothing VOG data, for online classification of eye movements, and for predictive real-time control of a gaze-driven head-mounted camera (EyeSeeCam). With multiple models running in parallel, it was possible to distinguish between fixations, slow-phase eye movements, and saccades. Under the assumption that each class of eye movement follows a distinct model, the type of eye movement that occurred can be determined by evaluating the probability of each model.
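A minimal sketch of the predict/correct cycle and the multiple-model probability evaluation described above, assuming a generic linear-Gaussian state-space model; all class names, matrices, and parameters here are illustrative assumptions and are not taken from the paper:

```python
import numpy as np

class KalmanModel:
    """One hypothesis in the filter bank (e.g. fixation, slow phase, saccade)."""

    def __init__(self, F, H, Q, R, x0, P0):
        self.F, self.H, self.Q, self.R = F, H, Q, R   # dynamics, observation, noise covariances
        self.x, self.P = x0.copy(), P0.copy()          # state estimate and its covariance

    def step(self, z):
        """Predict, then correct with measurement z; return the innovation likelihood."""
        # Prediction from the model dynamics
        x_pred = self.F @ self.x
        P_pred = self.F @ self.P @ self.F.T + self.Q
        # Innovation: difference between measurement and prediction
        y = z - self.H @ x_pred
        S = self.H @ P_pred @ self.H.T + self.R
        # Correction step: refine the predicted state with the measurement
        K = P_pred @ self.H.T @ np.linalg.inv(S)
        self.x = x_pred + K @ y
        self.P = (np.eye(len(self.x)) - K @ self.H) @ P_pred
        # Gaussian likelihood of the innovation, used to weight this model
        d = float(y.T @ np.linalg.inv(S) @ y)
        return np.exp(-0.5 * d) / np.sqrt((2 * np.pi) ** len(y) * np.linalg.det(S))

def update_model_probabilities(models, probs, z):
    """Run all models in parallel on measurement z and update their probabilities."""
    likelihoods = np.array([m.step(z) for m in models])
    probs = probs * likelihoods
    return probs / probs.sum()   # the most probable model indicates the eye-movement class
```

In such a scheme, each eye-movement class would be represented by its own `KalmanModel` with distinct dynamics or noise settings, and the class assigned to each sample is simply the model with the highest posterior probability.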
