Abstract

Gesturing offers a more natural and intuitive alternative input modality for design. However, standard input devices do not fully capture natural hand motions in design work. A key challenge lies in understanding how gesturing can contribute to human–computer interaction and in identifying the patterns within gestures. This paper analyzes human gestures to define a gesture vocabulary for descriptive mid-air interactions in a virtual reality environment. We conducted experiments with twenty participants describing two chairs (simple and abstract) with different levels of complexity. The paper presents a detailed analysis of gesture distribution and hand preferences for each description task, and compares the proposed approach of defining a vocabulary from combined gestures (GestAlt) with previously suggested methods. The findings show that GestAlt successfully describes the gestures employed in both tasks (60% of all gestures for the simple chair and 69% for the abstract chair). These findings can inform the development of an intuitive mid-air interface based on gesture recognition.
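
For illustration, coverage figures like the 60% and 69% reported above can be obtained by tallying coded gesture instances against a candidate vocabulary. The sketch below is a minimal example of that arithmetic; the gesture labels and the `vocabulary_coverage` helper are hypothetical placeholders, not the paper's actual coding scheme or method.

```python
from collections import Counter

def vocabulary_coverage(observed_gestures, vocabulary):
    """Fraction of observed gesture instances whose label is in the vocabulary."""
    counts = Counter(observed_gestures)
    covered = sum(n for label, n in counts.items() if label in vocabulary)
    return covered / sum(counts.values())

# Hypothetical coded gestures from one description task (illustrative only).
simple_chair_gestures = ["trace-outline", "flat-surface", "point",
                         "grasp", "trace-outline"]
# Hypothetical combined-gesture vocabulary.
candidate_vocab = {"trace-outline", "flat-surface", "grasp"}

print(f"coverage: {vocabulary_coverage(simple_chair_gestures, candidate_vocab):.0%}")
```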
