Abstract

Intelligent vehicles operating at different levels of automation require the driver to fully or partially perform the dynamic driving task (DDT), as well as its fallback, during a trip. Such vehicles create the need for novel human-machine interfaces (HMIs) designed for high-level vehicle control tasks. Multimodal interfaces (MMIs) offer advantages over unimodal interfaces, such as improved recognition, faster interaction, and adaptability to the situation. In this study, we developed and evaluated an MMI system with three input modalities (touchscreen, hand gesture, and haptic) for entering tactical-level control commands (e.g., lane changing, overtaking, and parking). We evaluated the effectiveness of the MMI system in driving-simulator experiments. The results show that the multimodal HMI significantly reduced driver workload, improved the efficiency of interaction, and minimized input errors compared with the unimodal interfaces. Moreover, we identified relationships between input types and modalities: location-based inputs were entered most effectively through the touchscreen interface, and time-critical inputs through the haptic interface. These results demonstrate the functional advantages and effectiveness of the multimodal interface system over its unimodal components for conducting tactical-level driving tasks.
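
The abstract does not describe the system's implementation. Purely as an illustration, the following minimal Python sketch shows one way tactical-level commands arriving from the three modalities might be collected and arbitrated before being forwarded to the vehicle. All names and parameters here (MultimodalDispatcher, CommandEvent, fusion_window_s, the confidence threshold) are hypothetical and are not taken from the paper.

    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Optional
    import time

    class Modality(Enum):
        TOUCHSCREEN = auto()
        HAND_GESTURE = auto()
        HAPTIC = auto()

    class TacticalCommand(Enum):
        LANE_CHANGE_LEFT = auto()
        LANE_CHANGE_RIGHT = auto()
        OVERTAKE = auto()
        PARK = auto()

    @dataclass
    class CommandEvent:
        command: TacticalCommand
        modality: Modality
        confidence: float   # recognizer confidence in [0, 1]
        timestamp: float    # seconds

    class MultimodalDispatcher:
        """Collects command events from the three input modalities and
        forwards a tactical command once it has been confirmed."""

        def __init__(self, min_confidence: float = 0.7, fusion_window_s: float = 0.5):
            self.min_confidence = min_confidence
            self.fusion_window_s = fusion_window_s
            self._pending: Optional[CommandEvent] = None

        def on_input(self, event: CommandEvent) -> Optional[TacticalCommand]:
            # Discard low-confidence recognitions (e.g., an ambiguous hand gesture).
            if event.confidence < self.min_confidence:
                return None
            # If a second modality confirms the same command within the fusion
            # window, issue it (redundant multimodal confirmation).
            if (self._pending is not None
                    and self._pending.command == event.command
                    and self._pending.modality != event.modality
                    and event.timestamp - self._pending.timestamp <= self.fusion_window_s):
                self._pending = None
                return event.command
            # Otherwise remember the event; a real system might also issue a
            # single high-confidence touchscreen or haptic input directly.
            self._pending = event
            return None

    if __name__ == "__main__":
        # A hand gesture followed shortly by a haptic confirmation of the same command.
        dispatcher = MultimodalDispatcher()
        now = time.time()
        dispatcher.on_input(CommandEvent(TacticalCommand.OVERTAKE, Modality.HAND_GESTURE, 0.8, now))
        issued = dispatcher.on_input(CommandEvent(TacticalCommand.OVERTAKE, Modality.HAPTIC, 0.9, now + 0.2))
        print(issued)  # TacticalCommand.OVERTAKE

In this sketch, a confirmation across two different modalities is taken as the trigger for issuing a command; this is only one plausible arbitration strategy, chosen to show how modality and input type could be kept distinct in the event model.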
