Abstract
The time-course of cross-modal semantic interactions between pictures and either naturalistic sounds or spoken words was compared. Participants performed a speeded picture categorization task while hearing a task-irrelevant auditory stimulus presented at various stimulus onset asynchronies (SOAs) with respect to the visual picture. Both naturalistic sounds and spoken words gave rise to cross-modal semantic congruency effects (i.e., facilitation by semantically congruent sounds and inhibition by semantically incongruent sounds, as compared to a baseline noise condition) when the onset of the sound led that of the picture by 240 ms or more. Both naturalistic sounds and spoken words also gave rise to inhibition irrespective of their semantic congruency when presented within 106 ms of the onset of the picture. The peak of this cross-modal inhibitory effect occurred earlier for spoken words than for naturalistic sounds. These results therefore demonstrate that the semantic priming of visual picture categorization by auditory stimuli only occurs when the onset of the sound precedes that of the visual stimulus. The different time-courses observed for naturalistic sounds and spoken words likely reflect the different processing pathways to access the relevant semantic representations.