Abstract

We propose HeadGesture, a hands-free input approach for interacting with Head-Mounted Display (HMD) devices. With HeadGesture, users do not need to raise their arms to perform mid-air gestures or operate remote controllers; instead, they interact with the device through simple head movements. This leaves users' hands free for other tasks, e.g., taking notes or manipulating tools. The approach also reduces hand occlusion of the field of view [11] and alleviates arm fatigue [7]. However, one main challenge for HeadGesture is distinguishing the defined gestures from unintentional head movements. To generate intuitive gestures and address the recognition problem, we proceed through a process of Exploration - Design - Implementation - Evaluation. We first design the gesture set through experiments on gesture space exploration and gesture elicitation with users. We then implement recognition algorithms, comprising gesture segmentation, data reformation and unification, feature extraction, and machine-learning-based classification. Finally, we evaluate user performance of HeadGesture in a target selection experiment and in application tests. The results demonstrate that HeadGesture is comparable to mid-air hand gestures in completion time, while users report significantly less fatigue than with hand gestures and can learn and remember the gestures easily. Based on these findings, we expect HeadGesture to serve as an efficient supplementary input approach for HMD devices.
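To make the recognition pipeline named above concrete, the sketch below segments a stream of head-motion samples, resamples each segment to a fixed length, extracts simple statistical features, and trains a classifier. This is a minimal illustration under stated assumptions, not the authors' implementation: the segmentation rule, the feature set, the fixed window length, and the choice of scikit-learn's SVC are all placeholders for the paper's actual design.

```python
# Illustrative head-gesture recognition pipeline (assumptions, not the
# paper's method): threshold-based segmentation, linear resampling,
# per-axis statistics, and an SVM classifier.
import numpy as np
from sklearn.svm import SVC

WINDOW = 64  # assumed fixed length each segment is resampled to

def segment_stream(samples, threshold=0.15):
    """Cut a stream of head-motion samples (n x 3 axes) into candidate
    gesture segments wherever motion magnitude exceeds a threshold
    (hypothetical segmentation rule)."""
    active = np.abs(samples).max(axis=1) > threshold
    segments, start = [], None
    for i, is_active in enumerate(active):
        if is_active and start is None:
            start = i
        elif not is_active and start is not None:
            segments.append(samples[start:i])
            start = None
    if start is not None:
        segments.append(samples[start:])
    return segments

def unify(segment):
    """Resample a variable-length segment (n x 3) to WINDOW rows,
    standing in for the paper's 'data reformation and unification' step."""
    idx = np.linspace(0, len(segment) - 1, WINDOW)
    xs = np.arange(len(segment))
    return np.stack([np.interp(idx, xs, segment[:, k])
                     for k in range(segment.shape[1])], axis=1)

def features(segment):
    """Simple per-axis statistics as a feature vector."""
    return np.concatenate([segment.mean(axis=0), segment.std(axis=0),
                           segment.min(axis=0), segment.max(axis=0)])

def train(raw_segments, labels):
    """Fit a classifier on labeled gesture segments; the RBF-kernel SVM
    is an assumed choice, not confirmed by the abstract."""
    X = np.array([features(unify(s)) for s in raw_segments])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf
```

In practice, rejecting unintentional everyday head motion, the core challenge the abstract identifies, would require richer features and negative training examples than this sketch includes.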
