Abstract

Background: This study explores design concept generation for intuitive non-touch gesture UX (Airtouch UX) and the application domain for gestures. The process was planned as both an organized top-down process and a creative bottom-up process. Accordingly, the idea divergence process for this invisible design problem was examined in consideration of the task-input-feedback cycle of system interaction.

Methods: Prior research on gesture as an input method was analyzed and extended into a detailed decomposition model of the gesture interaction framework. A work domain analysis and a creative workshop were conducted with 30 designers, focusing on gesture UX design for tablet devices. Five representative tasks were extracted based on context analysis, and all designers generated two types of display code gestures: verbal code-based and spatial code-based gestures. The gestures were then tested in terms of ease of recognition and memory-based response.

Results: Verbal code gestures showed a superior recognition rate to spatial code gestures. On the other hand, spatial code gestures were preferred over verbal code gestures in terms of memory-based response.

Conclusion: This research suggests the effectiveness and efficiency of visual cue visualization for a gesture interface. In future work, we aim to perform prototype-based usability verification and to conduct an additional gesture-mental model compatibility test.
