Abstract

If mid-air interaction is to be implemented in smart home environments, users will have to perform in-air gestures to address and manipulate multiple devices. This paper investigates a user-defined gesture vocabulary for basic control of a smart home device ecosystem consisting of 7 devices and a total of 55 referents (device-specific commands), which can be grouped into 14 commands that each apply to more than one device. The elicitation study was conducted within a general scenario of use covering all devices, to support contextual relevance; in addition, the referents were presented with minimal affordances to minimize widget-specific proposals. Besides computing agreement rates for all referents, we also computed the internal consistency of user proposals (single-user agreement across multiple commands). In all, 1047 gestures from 18 participants were recorded, analyzed, and paired with think-aloud data. The study yielded a mid-air gesture vocabulary for a smart-device ecosystem that includes several gestures with very high, high, and medium agreement rates. Furthermore, most single-user gesture proposals showed high internal consistency, which reveals that each user developed and applied their own mental model of the whole set of interactions with the device ecosystem. Thus, we suggest that mid-air interaction support for smart homes should not only offer a built-in gesture set but also provide functions for identifying and defining personalized gesture assignments for basic user commands.

Highlights

  • During the last decade, we have experienced an explosion in affordable motion-tracking technologies small enough to be integrated within home appliances or smart environments such as homes, industrial buildings, classrooms, and labs.

  • This paper presents an elicitation study of upper-body gestures for accessory-free, mid-air control of a smart home device ecosystem, consisting of 7 devices and a total of 55 referents that can be grouped into 14 commands.

  • We have investigated a potentially uniform gesture set applicable to any device, so that users would not have to remember different gestures for multiple devices in the home.



Introduction

We have experienced an explosion in affordable motion-tracking technologies small enough to be integrated within home appliances or smart environments such as homes, industrial buildings, classrooms, and labs. These devices employ various technologies, such as depth cameras, wearable assistive devices, and smartwatches, to capture the user's motion and translate it into a mid-air command for a specific device. The identification of appropriate gestures depends on the context of use and is an important design issue. A user-centered approach to gesture design is the gesture elicitation (or guessability) study, which designers apply to extract appropriate gestures that "meet certain design criteria such as discoverability, ease-of-performance, memorability, or reliability" [2].
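As an illustration of how such studies score the gestures participants propose, the agreement rate for a referent is commonly computed from the sizes of the groups of identical proposals (as in Vatavu and Wobbrock's formulation): pairs of participants who proposed the same gesture, divided by all possible pairs. The sketch below, with hypothetical gesture labels for a hypothetical "volume up" referent, shows one way to compute it; it is not code from the paper itself.

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate for one referent.

    proposals: one gesture label per participant. Returns the fraction
    of participant pairs that proposed the same gesture.
    """
    n = len(proposals)
    if n < 2:
        return 1.0 if n == 1 else 0.0
    groups = Counter(proposals)  # identical proposals form a group
    # Ordered pairs agreeing within groups / all ordered pairs.
    return sum(k * (k - 1) for k in groups.values()) / (n * (n - 1))

# Hypothetical data: 6 participants propose gestures for "volume up".
labels = ["swipe-up", "swipe-up", "swipe-up",
          "raise-palm", "raise-palm", "circle"]
print(round(agreement_rate(labels), 3))  # prints 0.267
```

A referent where all participants agree scores 1.0, while completely disjoint proposals score 0.0, which matches the "very high / high / medium agreement" banding the abstract refers to.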

