Abstract

Background

Different sources of sensory information can interact, often shaping what we think we have seen or heard. This can enhance the precision of perceptual decisions relative to those made on the basis of a single source of information. From a computational perspective, there are multiple reasons why this might happen, and each predicts a different degree of enhanced precision. Relatively slight improvements can arise when perceptual decisions are made on the basis of multiple independent sensory estimates, as opposed to just one. These improvements can arise as a consequence of probability summation. Greater improvements can occur if two initially independent estimates are summated to form a single integrated code, especially if the summation is weighted in accordance with the variance associated with each independent estimate. This form of combination is often described as a Bayesian maximum likelihood estimate. Still greater improvements are possible if the two sources of information are encoded via a common physiological process.

Principal Findings

Here we show that the provision of simultaneous audio and visual speech cues can result in substantial sensitivity improvements, relative to single sensory modality based decisions. The magnitude of the improvements is greater than can be predicted on the basis of either a Bayesian maximum likelihood estimate or probability summation.

Conclusion

Our data suggest that primary estimates of speech content are determined by a physiological process that takes input from both visual and auditory processing, resulting in greater sensitivity than would be possible if initially independent audio and visual estimates were formed and then subsequently combined.

Highlights

  • Researchers often refer to multi-sensory integration, but some evidence cited for this is inconclusive

  • Our data suggest that primary estimates of speech content are determined by a physiological process that takes input from both visual and auditory processing, resulting in greater sensitivity than would be possible if initially independent audio and visual estimates were formed and subsequently combined

  • Two types of observation are often taken as evidence: subjective reports concerning changed perceptual content [1,2,3,4,5,6] and changes in the precision of perceptual decisions [7,8,9]

Introduction

Researchers often refer to multi-sensory integration, but some of the evidence cited for this is inconclusive. Subjective reports could change because sensory integration has taken place, or because the provision of additional information disposes the observer to report a particular outcome. The latter possibility could be described as a decision-level sensory interaction – it does not necessitate integration. Different sources of sensory information can interact, often shaping what we think we have seen or heard. This can enhance the precision of perceptual decisions relative to those made on the basis of a single source of information. Slight improvements can arise when perceptual decisions are made on the basis of multiple independent sensory estimates, as opposed to just one. Still greater improvements are possible if the two sources of information are encoded via a common physiological process.
