Abstract

Prior knowledge about the spatial frequency (SF) of upcoming visual targets (Gabor patches) speeds up average reaction times and decreases their standard deviation. This has often been regarded as evidence for multichannel processing of SF in vision. Multisensory research, on the other hand, has often reported sensory interactions between auditory and visual signals. These interactions enhance visual processing, leading to lower sensory thresholds and/or more precise visual estimates. However, little is known about how multisensory interactions may affect uncertainty regarding visual SF. We conducted a reaction time study in which we manipulated the uncertainty about the SF of visual targets (SF was blocked or interleaved across trials) and compared visual-only versus audio-visual presentations. Surprisingly, the analysis of reaction times and their standard deviation revealed an impairment of the selective monitoring of the SF channel in the presence of a concurrent sound. Moreover, this impairment was especially pronounced when the relevant channels were high SFs at high visual contrasts. We propose that an accessory sound automatically favours visual processing of low SFs through the magnocellular channels, thereby detracting from the potential benefits of tuning into high-SF psychophysical channels.
