Abstract
Information from different sensory modalities can interact, shaping what we think we have seen, heard, or otherwise perceived. Such interactions can enhance the precision of perceptual decisions relative to those based on information from a single sensory modality. Several computational processes could account for such improvements. Slight improvements could arise if decisions are based on multiple independent sensory estimates, as opposed to just one. Still greater improvements could arise if initially independent estimates are summed to form a single integrated code. This hypothetical process has often been described as optimal when it results in bimodal performance consistent with a summation of unimodal estimates weighted in proportion to the precision of each initially independent sensory code. Here we examine cross-modal cue combination for audio-visual temporal rate and spatial location cues. While our results are suggestive of a cross-modal encoding advantage, the degree of facilitation falls short of that predicted by a precision-weighted summation process. These data accord with other published observations and suggest that precision-weighted combination is not a general property of human cross-modal perception.
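As a point of reference for the precision-weighted summation benchmark described above, the sketch below shows how that prediction is conventionally computed from unimodal noise estimates. The variable names and threshold values are illustrative assumptions, not data from this study.

```python
import math

# Illustrative unimodal discrimination thresholds (standard deviations of the
# internal estimates). These example values are hypothetical, not the study's data.
sigma_a = 2.0   # auditory estimate (arbitrary units)
sigma_v = 1.5   # visual estimate

# Precision-weighted ("optimal") summation: each cue is weighted by its relative
# reliability (inverse variance), so the more precise cue gets the larger weight,
# and the combined estimate's variance is smaller than either unimodal variance.
w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)
w_v = sigma_a**2 / (sigma_a**2 + sigma_v**2)
sigma_av = math.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))

print(f"auditory weight {w_a:.2f}, visual weight {w_v:.2f}")
print(f"predicted bimodal threshold {sigma_av:.2f} vs best unimodal {min(sigma_a, sigma_v):.2f}")
```

With these illustrative values the predicted bimodal threshold (about 1.2) is lower than the better unimodal threshold (1.5); facilitation falling short of this prediction is what the abstract describes.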
Highlights
Information from different sensory modalities can interact, shaping what we think we have seen, heard, or otherwise perceived
A process of optimally weighted summation does not just allow for perceptual decisions to be dominated by diverse sensory modalities; it allows for enhanced sensitivity relative to when information is available from just one sensory modality.
We find that cross-modal sensitivities are enhanced relative to the most precise unimodal sensitivity displayed by each participant. In both experiments, probability summation better describes performance on congruent bimodal trials than do precision-weighted summation predictions, although neither account accurately describes group-level performance. The two benchmarks are illustrated in the sketch below.
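To make the contrast between the two benchmarks concrete, here is a minimal sketch of the probability-summation prediction, assuming a simple high-threshold framing in which each modality independently yields a correct detection. The proportions correct are hypothetical examples, not results from the study.

```python
# Example unimodal proportions correct (hypothetical values, not the study's data).
p_auditory = 0.70
p_visual = 0.75

# Probability summation: the observer has two independent chances to make a
# correct decision, one per modality, and succeeds if either chance succeeds.
p_bimodal = 1 - (1 - p_auditory) * (1 - p_visual)

print(f"probability-summation prediction: {p_bimodal:.3f}")  # 0.925 for these values
```

Bimodal performance exceeding the probability-summation prediction is usually taken as evidence for integration of the two estimates, whereas precision-weighted summation (the earlier sketch) sets the stricter, "optimal" benchmark.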
Summary
Information from different sensory modalities can interact, shaping what we think we have seen, heard, or otherwise perceived. Slight improvements in the precision of perceptual decisions could arise if decisions are based on multiple independent sensory estimates; still greater improvements could arise if initially independent estimates are summed to form a single integrated code. This hypothetical process has often been described as optimal when it results in bimodal performance consistent with a summation of unimodal estimates weighted in proportion to the precision of each initially independent sensory code. The brain uses these precision estimates when it sums the initially independent sensory codes together to form an integrated code; this process is often referred to as an optimally weighted summation [5]. A process of optimally weighted summation does not just allow for perceptual decisions to be dominated by diverse sensory modalities; it allows for enhanced sensitivity relative to when information is available from just one sensory modality. This is true even if the two sensory modalities provide precise sensory estimates.