Abstract

Adults combine information from different sensory modalities to estimate object properties such as size or location. This process is optimal in that (i) sensory information is weighted according to relative reliability: more reliable estimates have more influence on the combined estimate and (ii) the combined estimate is more reliable than the component uni-modal estimates. Previous studies suggest that optimal sensory integration does not emerge until around 10 years of age. Younger children rely on a single modality or combine information using inappropriate sensory weights. Children aged 4–11 and adults completed a simple audio-visual task in which they reported either the number of beeps or the number of flashes in uni-modal and bi-modal conditions. In bi-modal trials, beeps and flashes differed in number by 0, 1 or 2. Mutual interactions between the sensory signals were evident at all ages: the reported number of flashes was influenced by the number of simultaneously presented beeps and vice versa. Furthermore, for all ages, the relative strength of these interactions was predicted by the relative reliabilities of the two modalities, in other words, all observers weighted the signals appropriately. The degree of cross-modal interaction decreased with age: the youngest observers could not ignore the task-irrelevant modality—they fully combined vision and audition such that they perceived equal numbers of flashes and beeps for bi-modal stimuli. Older observers showed much smaller effects of the task-irrelevant modality. Do these interactions reflect optimal integration? Full or partial cross-modal integration predicts improved reliability in bi-modal conditions. In contrast, switching between modalities reduces reliability. Model comparison suggests that older observers employed partial integration, whereas younger observers (up to around 8 years) did not integrate, but followed a sub-optimal switching strategy, responding according to either visual or auditory information on each trial.
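
A brief illustrative sketch (assuming independent Gaussian noise on each modality's estimate of event number; this is not the authors' fitted model) of why reliability-weighted integration improves precision whereas trial-by-trial switching between modalities does not:

```python
# Minimal simulation sketch (not the authors' fitted model): assumes independent
# Gaussian noise on the auditory and visual estimates of event number, and
# compares reliability-weighted integration with a trial-by-trial switching
# strategy.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000
true_n = 2.0                       # true number of events
sigma_a, sigma_v = 0.3, 0.6        # assumed noise SDs: audition more reliable than vision

# Noisy uni-modal estimates on each trial
est_a = true_n + rng.normal(0.0, sigma_a, n_trials)
est_v = true_n + rng.normal(0.0, sigma_v, n_trials)

# Reliability-weighted integration: weights proportional to inverse variance
w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_v**2)
integrated = w_a * est_a + (1 - w_a) * est_v

# Switching strategy: respond from only one modality on each trial
use_audition = rng.random(n_trials) < 0.5
switched = np.where(use_audition, est_a, est_v)

print(f"audition alone  SD = {est_a.std():.3f}")
print(f"vision alone    SD = {est_v.std():.3f}")
print(f"integration     SD = {integrated.std():.3f}   (better than the best single cue)")
print(f"switching       SD = {switched.std():.3f}   (never better than the best single cue)")
```

Under these assumed noise levels the integrated estimate is more precise than the better single cue, whereas the variability of the switching estimate falls between the two uni-modal values; these are the qualitative predictions that the model comparison in the abstract relies on.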

Highlights

  • Audio-visual interactions can produce illusions: if a single visual flash occurs at the same time as two beeps, we sometimes perceive two flashes. This is because auditory information is generally more reliable than vision for judging when things happen, so it dominates our audio-visual percept in temporal tasks

  • We asked children and adults to report the number of visual flashes or auditory beeps when these were presented simultaneously

Introduction

Imagine you are at an academic conference. You confidently answer ‘3’; you were able to combine information from audition and vision, having both seen and heard the incident. We often receive information about the same object or event from multiple sensory modalities, which we can integrate to improve the precision of our perceptual estimates. We integrate multisensory information for a variety of spatial and temporal tasks, such as judging the size, location, number or duration of objects or events [1,2,3,4,5]. A key benefit of this integration is that uncertainty, or variance (random noise), in the combined multisensory estimate is reduced relative to either of the component uni-sensory estimates, see e.g. [6].
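
In the standard maximum-likelihood formulation of cue combination often used to formalise this benefit (a textbook sketch, not necessarily the exact model fitted in this study), each uni-sensory estimate is weighted by its relative reliability (inverse variance), and the combined estimate has lower variance than either cue alone:

```latex
\hat{S}_{AV} = w_A \hat{S}_A + w_V \hat{S}_V,
\qquad
w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_V^2}, \quad w_V = 1 - w_A,
\qquad
\sigma_{AV}^2 = \frac{\sigma_A^2\,\sigma_V^2}{\sigma_A^2 + \sigma_V^2} \le \min(\sigma_A^2,\, \sigma_V^2).
```

For example, with an auditory standard deviation of 0.3 and a visual standard deviation of 0.6, the auditory weight is 0.8 and the combined standard deviation is about 0.27, below that of the better single cue.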
