Abstract

Despite the utility of gaze gestures as an input method, there is a lack of guidelines regarding how to design gaze gestures, which algorithms to use for gaze gesture recognition, and how these algorithms compare in terms of performance. To facilitate the development of applications that leverage gaze gestures, we evaluated the performance of a combination of template-based and data-driven algorithms on two custom gesture sets that can map to user actions. Template-based algorithms had consistently high accuracies but the slowest runtimes, making them best suited to small gesture sets or accuracy-critical applications. Data-driven algorithms run much faster and scale better to larger gesture sets, but require more training data to match the accuracy of the template-based methods. The main takeaways for gesture set design are (1) gestures should have distinct forms even when performed imprecisely and (2) gestures should have clear key points for the eyes to fixate on.
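To make the template-based approach concrete, below is a minimal sketch of one common form of template matching: resampling a recorded gaze path to a fixed number of points and classifying it by mean point-to-point distance to each stored template. This is an illustrative assumption about the general technique, not the specific algorithms or gesture sets evaluated in the paper; all function names and the example templates are hypothetical.

```python
import math

def resample(points, n=32):
    """Resample a 2D gaze path to n points evenly spaced along its arc length."""
    dists = [math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)]
    total = sum(dists)
    if total == 0:
        return [points[0]] * n
    interval = total / (n - 1)
    pts = list(points)
    resampled = [pts[0]]
    d_accum = 0.0
    i = 0
    while len(resampled) < n and i < len(pts) - 1:
        d = math.dist(pts[i], pts[i + 1])
        if d > 0 and d_accum + d >= interval:
            # Interpolate a new point exactly one interval along the path.
            t = (interval - d_accum) / d
            nx = pts[i][0] + t * (pts[i + 1][0] - pts[i][0])
            ny = pts[i][1] + t * (pts[i + 1][1] - pts[i][1])
            resampled.append((nx, ny))
            pts.insert(i + 1, (nx, ny))  # continue measuring from the new point
            d_accum = 0.0
        else:
            d_accum += d
        i += 1
    while len(resampled) < n:  # guard against floating-point shortfall
        resampled.append(pts[-1])
    return resampled

def template_match(path, templates, n=32):
    """Classify a gaze path as the template with the smallest mean distance."""
    p = resample(path, n)
    best_label, best_score = None, float("inf")
    for label, tmpl in templates.items():
        t = resample(tmpl, n)
        score = sum(math.dist(a, b) for a, b in zip(p, t)) / n
        if score < best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical two-gesture set: a rightward stroke and a downward stroke.
templates = {"right": [(0, 0), (1, 0)], "down": [(0, 0), (0, 1)]}
print(template_match([(0, 0), (0.5, 0.05), (1, 0.02)], templates))  # → right
```

The per-classification cost grows linearly with the number of templates, which illustrates the runtime trade-off noted above: template matching stays accurate with a single example per gesture, but scales poorly as the gesture set grows.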
