Abstract

A stroke is a basic limb movement that both humans and animals perform naturally and repeatedly. Since their introduction into gestural interaction, mid-air stroke gestures have found a wide range of applications and proven intuitive to use. In this paper, we present an approach to building a command-to-gesture mapping that exploits the semantic association between interactive commands and the directions of mid-air unistroke gestures. Directional unistroke gestures draw on the semantic symmetry of commands, which yields a gesture set that is more systematic for users to grasp and reduces the number of gestures they need to learn. However, the learnability of directional unistroke gestures varies across commands. Through a user elicitation study, a set of eight directional mid-air unistroke gestures was selected based on participants’ subjective ratings of how strongly each direction is associated with the corresponding command. We evaluated this gesture set in a follow-up study to investigate its learnability, comparing the directional mid-air unistroke gestures with user-preferred freehand gestures. Our findings offer preliminary evidence that “return”, “save”, “turn-off” and “mute” are the interaction commands best suited to directional mid-air unistrokes, which may have implications for the design of mid-air gestures in human–computer interaction.

Highlights

  • In the field of human–computer interaction (HCI), gestures are regarded as a promising input modality for conveying an interaction task or command, which semantically comprises an action and a target object

  • We calculated agreement rates (AR) for the commands using the revised formula proposed by Vatavu and Wobbrock [62] (see Formula 1, sketched below this list), where P is the set of all proposed gestures for a referent r, |P| is the size of that set, and Pi are the subsets of identical proposals within P

  • Participants showed a stronger preference for certain freehand gestures, based on their prior experience
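
For reference, a sketch of the revised agreement rate formula (Formula 1) as it is commonly stated in Vatavu and Wobbrock’s work, reconstructed here from the definitions in the highlight above rather than copied from the full text:

  AR(r) = \frac{|P|}{|P| - 1} \sum_{P_i \subseteq P} \left( \frac{|P_i|}{|P|} \right)^{2} - \frac{1}{|P| - 1}

As a hypothetical worked example, if |P| = 20 proposals are elicited for a referent and the identical-proposal subsets have sizes 12, 5 and 3, then AR = (20/19) · ((12/20)² + (5/20)² + (3/20)²) − 1/19 ≈ 0.42.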


Summary

Introduction

In the field of human–computer interaction (HCI), gestures are regarded as a promising input modality for conveying an interaction task or command, which semantically comprises an action and a target object. Compared with touch-based gestures, touchless mid-air gestures, if robust, offer the advantages of natural HCI without the constraints of physical interfaces. With the growing demand for diverse mid-air interaction scenarios, considerable attention has been devoted in recent years to the participatory design of gesture-based applications. To this end, user elicitation studies have become increasingly popular as a design approach for defining preferred gestures for a set of touchless interactions with remote displays or devices [2].

