Abstract

Effective decision-making in an uncertain world requires making use of all available information, even if it is distributed across different sensory modalities, as well as trading off the speed of a decision against its accuracy. In tasks with a fixed stimulus presentation time, animal and human subjects have previously been shown to combine information from several modalities in a statistically optimal manner. Furthermore, for easily discriminable stimuli and under the assumption that reaction times result from a race-to-threshold mechanism, multimodal reaction times are typically faster than predicted from unimodal conditions when assuming independent (parallel) races for each modality. However, due to a lack of adequate ideal observer models, it has remained unclear whether subjects perform optimal cue combination when they are allowed to choose their response times freely.

Based on data collected from human subjects performing a visual/vestibular heading discrimination task, we show that the subjects exhibit worse discrimination performance in the multimodal condition than predicted by standard cue combination criteria, which relate multimodal discrimination performance to sensitivity in the unimodal conditions. Furthermore, multimodal reaction times are slower than those predicted by a parallel race model, opposite to what is commonly observed for easily discriminable stimuli.

Despite violating the standard criteria for optimal cue combination, subjects still accumulate evidence optimally across time and cues, even when the strength of the evidence varies over time. Additionally, subjects adjust their decision bounds, which control the trade-off between the speed and accuracy of a decision, such that their rate of correct decisions is close to the maximum achievable value.
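The standard cue combination criterion mentioned above predicts multimodal sensitivity from the unimodal conditions by weighting each cue by its reliability (inverse variance). A minimal sketch of that prediction, using illustrative threshold values rather than the paper's measured data:

```python
import numpy as np

# Hypothetical unimodal heading-discrimination thresholds (sigma, in degrees).
# These numbers are illustrative only, not values from the study.
sigma_vis = 3.0   # visual (optic flow) condition
sigma_ves = 4.0   # vestibular condition

# Optimal (inverse-variance-weighted) combination predicts a multimodal
# threshold below either unimodal threshold:
#   sigma_comb^2 = sigma_vis^2 * sigma_ves^2 / (sigma_vis^2 + sigma_ves^2)
sigma_comb = np.sqrt((sigma_vis**2 * sigma_ves**2) /
                     (sigma_vis**2 + sigma_ves**2))

print(f"predicted multimodal threshold: {sigma_comb:.2f} deg")  # 2.40 deg
```

Discrimination performance worse than this prediction is what the abstract refers to as violating the standard criteria.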

Highlights

  • Animal and human subjects have previously been shown, in tasks with fixed stimulus presentation time, to combine information from several modalities in a statistically optimal manner

  • For highly salient stimuli and under the assumption that reaction times result from a race-to-threshold mechanism, multimodal reaction times are faster than predicted from unimodal conditions when assuming independent races for each modality

  • Based on data collected from human subjects performing a visual/vestibular heading discrimination task, we show that the subjects exhibit worse discrimination performance in the multimodal condition than predicted by standard cue combination criteria applied to their behavior in the unimodal conditions



Introduction

Animal and human subjects have previously been shown, in tasks with fixed stimulus presentation time, to combine information from several modalities in a statistically optimal manner. For highly salient stimuli and under the assumption that reaction times result from a race-to-threshold mechanism, multimodal reaction times are faster than predicted from unimodal conditions when assuming independent (parallel) races for each modality. However, due to a lack of adequate ideal observer models, it has remained unclear whether subjects maintain optimal cue combination when they are allowed to choose their response times freely.
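The parallel race model's prediction can be made concrete with a small simulation: if each modality runs an independent race to threshold, the multimodal response is triggered by whichever race finishes first, so mean multimodal reaction time must fall below either unimodal mean. The reaction-time distributions below are purely illustrative shifted log-normals, not fits to the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical unimodal reaction-time samples (seconds), illustrative only.
rt_vis = 0.3 + rng.lognormal(mean=-0.5, sigma=0.4, size=n)  # visual race
rt_ves = 0.3 + rng.lognormal(mean=-0.4, sigma=0.4, size=n)  # vestibular race

# Parallel (independent) race model: the response is produced by whichever
# unimodal race reaches its threshold first on each trial.
rt_race = np.minimum(rt_vis, rt_ves)

print(f"mean visual RT:               {rt_vis.mean():.3f} s")
print(f"mean vestibular RT:           {rt_ves.mean():.3f} s")
print(f"race-model multimodal RT:     {rt_race.mean():.3f} s")
```

Because the trial-wise minimum of two races can never exceed either race alone, the race-model mean is necessarily the fastest of the three. Multimodal reaction times slower than this prediction, as reported here, are the opposite of what is commonly observed for easily discriminable stimuli.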

Results
