Abstract

We examined user preferences for combining multiple interaction modalities in collaborative interaction with data shown on large vertical displays. Large vertical displays facilitate visual data exploration and allow the use of diverse interaction modalities by multiple users at different distances from the screen. Yet, how to offer multiple interaction modalities is a non-trivial problem. We conducted an elicitation study with 20 participants that generated 1015 interaction proposals combining touch, speech, pen, and mid-air gestures. Given the opportunity to interact using these four modalities, participants preferred speech interaction in 10 of 15 low-level tasks and direct manipulation for straightforward tasks such as showing a tooltip or selecting. In contrast to previous work, participants most favored unimodal and personal interactions. We identified what we call collaborative synonyms among their interaction proposals and found that pairs of users collaborated either unimodally and simultaneously or multimodally and sequentially. We provide insights into how end-users associate visual exploration tasks with certain modalities and how they collaborate at different interaction distances using specific interaction modalities. The supplemental material is available at https://osf.io/m8zuh/?view_only=34bfd907d2ed43bbbe37027fdf46a3fa.
