Abstract

Modern driver assistance systems make increasing use of auditory and tactile signals in order to reduce the driver's visual information load. This entails potential crossmodal interaction effects that need to be taken into account in designing an optimal system. Here we show that saccadic reaction times to visual targets (cockpit or outside mirror), presented in a driving simulator environment and accompanied by auditory or tactile accessory stimuli, follow some well-known spatiotemporal rules of multisensory integration, usually found under confined laboratory conditions. Auditory nontargets speed up reaction time by about 80 ms. The effect tends to be maximal when the nontarget is presented 50 ms before the target and when target and nontarget are spatially coincident. The effect of a tactile nontarget (vibrating steering wheel) was less pronounced and not spatially specific. It is shown that the average reaction times are well described by the stochastic "time window of integration" model for multisensory integration developed by the authors. This two-stage model postulates that crossmodal interaction occurs only if the peripheral processes from the different sensory modalities terminate within a fixed temporal interval, and that the amount of crossmodal interaction manifests itself in an increase or decrease of second-stage processing time. A qualitative test is consistent with the model prediction that the probability of interaction, but not the amount of crossmodal interaction, depends on target-nontarget onset asynchrony. A quantitative model fit yields estimates of individual participants' parameters, including the size of the time window. Some consequences for the design of driver assistance systems are discussed.
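The two-stage mechanism described above can be illustrated with a minimal Monte Carlo sketch. Everything below is an illustration, not the authors' fitted model: the exponential peripheral-stage distributions are a common TWIN-style assumption, and all parameter values (`mu_v`, `mu_a`, `mu_second`, `window`, `delta`) are made up for demonstration.

```python
import random

def simulate_twin(soa, window=200.0, delta=80.0, n=100_000,
                  mu_v=60.0, mu_a=40.0, mu_second=180.0, seed=1):
    """Monte Carlo sketch of a time-window-of-integration mechanism.

    `soa` is the nontarget onset relative to the target (negative =
    nontarget first).  Peripheral processing times are drawn from
    exponential distributions (an illustrative assumption).  Integration
    occurs only when the nontarget's peripheral process terminates before
    the target's and no more than `window` ms earlier; when it does,
    second-stage processing is shortened by `delta` ms.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        v = rng.expovariate(1.0 / mu_v)        # visual peripheral time
        a = soa + rng.expovariate(1.0 / mu_a)  # nontarget time, shifted by SOA
        integrated = a < v < a + window        # nontarget "wins the race"
        total += v + mu_second - (delta if integrated else 0.0)
    return total / n                           # mean reaction time in ms
```

Note how the sketch separates the two model claims: varying `soa` changes only the probability that the integration condition is met, while the amount of facilitation per integrated trial is the fixed `delta`, so mean facilitation is roughly `delta` times the integration probability.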

Highlights

  • The capacity of humans to simultaneously process information from separate sources is inherently limited [1,2]

  • A significant interaction between stimulus onset asynchrony (SOA) and spatial configuration (F(3, 18.8) = 19.4; p < 0.0001) indicates that the difference in saccadic reaction time (SRT) between the coincident and disparate conditions diminishes with increasing SOA

  • There was a significant interaction between participants and SOA (F(18, 19.8) = 4.6; p < 0.01), pointing to individual differences in the SRT-SOA curves


Introduction

The capacity of humans to simultaneously process information from separate sources is inherently limited [1,2]. This limitation is conspicuous in a traffic situation: the act of driving is a highly complex skill requiring the sustained monitoring of perceptual (predominantly visual) and cognitive inputs [3,4]. Recent developments of driver assistance systems, like front-collision warning or lane-change assistance systems, are aimed at alleviating the human workload. Some of these systems present their information on the windshield using visual overlays ("head-up display" technologies), presenting yet another source of information to be processed by the driver.
