Abstract

Foot-based gestures enable people to interact with mobile and wearable devices when their hands are unavailable. To be truly usable, foot gestures must be easy to perform yet recognizable by the system without being confused with everyday activities. However, designing such gestures typically requires multiple iterations of gesture design, model training, and evaluation. In this paper, we present SequenceSense, a tool that helps designers efficiently build a usable gesture set using inertial sensors. By supporting gesture modification through the sequencing of atomic actions and providing instant false-positive analysis, SequenceSense eliminates the need for repeated data collection studies to evaluate gesture usability; only the initial gesture sample collection is required. Unlike gesture recognizers that train a model on complete gestures, SequenceSense segments each gesture into a sequence of atomic actions. For example, a foot tap to the right may consist of (1) lifting the foot, (2) moving the foot to the right, and (3) landing the foot. SequenceSense then compares each gesture's sequence against a sequence database built from daily activities to identify potential conflicts. This allows gesture designers to build usable foot-based gestures without recollecting and re-evaluating gesture data. We validated SequenceSense's efficacy in designing usable gestures with low false-positive rates through a user study with nine gesture designers.
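The conflict check described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the string labels for atomic actions, and the use of exact contiguous-subsequence matching are all assumptions made for illustration only. A real system would derive atomic actions from inertial-sensor data and likely use a more tolerant matching scheme.

```python
# Hypothetical sketch: flag daily activities whose atomic-action
# sequences contain a candidate gesture's sequence, suggesting the
# gesture could be falsely triggered by that activity.

def find_conflicts(gesture_seq, activity_db):
    """Return the daily activities whose action sequences contain
    gesture_seq as a contiguous subsequence."""
    n = len(gesture_seq)
    conflicts = []
    for activity, seq in activity_db.items():
        if any(seq[i:i + n] == gesture_seq
               for i in range(len(seq) - n + 1)):
            conflicts.append(activity)
    return conflicts

# Illustrative labels only; these activities and action names are
# invented for the example, not taken from the paper's database.
right_tap = ["lift", "move_right", "land"]
daily_activities = {
    "walking": ["lift", "move_forward", "land",
                "lift", "move_forward", "land"],
    "crossing_legs": ["lift", "move_right", "land"],
}

print(find_conflicts(right_tap, daily_activities))  # ['crossing_legs']
```

A designer could then modify a conflicting gesture, for example by inserting or reordering an atomic action, and re-run the check instantly instead of collecting new data.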
