Abstract
In highly automated vehicles, passengers can engage in non-driving-related activities. Additionally, technical advancements allow for novel interaction possibilities such as voice, gesture, gaze, touch, or multimodal interaction to refer to both in-vehicle and outside objects (e.g., a thermostat or a restaurant). This interaction can be characterized by levels of urgency (e.g., due to late detection of objects) and cognitive load (e.g., because of watching a movie or working). Therefore, we implemented a Virtual Reality simulation and conducted a within-subjects study with N=11 participants evaluating the effects of urgency and cognitive load on modality usage in automated vehicles. We found that, while all modalities could be used, participants relied on touch the most, followed by gaze, especially for referencing external objects. This work helps to further the understanding of multimodal interaction and the requirements it poses for natural interaction in (automated) vehicles.