Abstract

Conversational user interfaces have been shown to be beneficial in many domains. However, many challenges remain when they are applied in production areas, e.g. as part of a virtual assistant supporting workers in knowledge-intensive maintenance work. Regarding input modalities, touchscreens are failure-prone in wet environments, and the quality of voice recognition is degraded by ambient noise. Augmenting a symmetric text- and voice-based user interface with gestural input offers a good solution, providing both efficiency and robust communication. This paper contributes to this research area with results on the application of appropriate head and one-hand gestures during maintenance work. We conducted an elicitation study with 20 participants and present the resulting gesture set. To facilitate gesture development and integration for application designers, we developed two classification models: one for head gestures and one for one-hand gestures. Additionally, we demonstrated a proof of concept for operators’ acceptance of a multimodal conversational user interface with support for gestural input during maintenance work. It encompasses two usability tests with 18 participants in different realistic but controlled settings: notebook repair (SUS: 82.1) and cutter head maintenance (SUS: 82.7).
