Abstract

The present study compared the time courses of the cross-modal semantic priming effects elicited by naturalistic sounds and spoken words on visual picture processing. Following an auditory prime, a picture (or blank frame) was briefly presented and then immediately masked. The participants had to judge whether or not a picture had been presented. Naturalistic sounds consistently elicited a cross-modal semantic priming effect on visual sensitivity (d') for pictures (higher d' in the congruent than in the incongruent condition) at the 350-ms rather than at the 1,000-ms stimulus onset asynchrony (SOA). Spoken words mainly elicited a cross-modal semantic priming effect at the 1,000-ms rather than at the 350-ms SOA, but this effect was modulated by the order in which the two SOAs were tested. It would therefore appear that visual picture processing can be rapidly primed by naturalistic sounds via cross-modal associations, and that this effect is short-lived. In contrast, spoken words prime visual picture processing over a wider range of prime-target intervals, though this effect depended on the prior testing context.

Highlights

  • The present study compared the time courses of the cross-modal semantic priming effects elicited by naturalistic sounds and spoken words on visual picture processing.

  • Further evidence comes from an event-related potential (ERP) study: When a spoken word led a target picture by around 1,670 ms, the P1 component associated with the picture occurred earlier in the congruent than in the incongruent condition, but no such congruency effect was induced by naturalistic sounds (Boutonnet & Lupyan, 2015).

  • d′ in each condition (e.g., d′incongruent) was estimated on the basis of 96 trials (48 picture-absent trials × 2 blocks; see Table 1); d′ values were calculated from the hit and false alarm (FA) rates and submitted to a three-way analysis of variance (ANOVA) with the factors of congruency, prime type, and stimulus onset asynchrony (SOA).
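The d′ sensitivity index referred to in the highlights is conventionally computed as the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch of that calculation (the rates below are illustrative values, not data from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate).

    Both rates must lie strictly between 0 and 1; in practice, rates of
    exactly 0 or 1 are usually adjusted (e.g., by a small correction)
    before the z-transform.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Illustrative example: 85% hits, 20% false alarms
print(round(d_prime(0.85, 0.20), 2))  # → 1.88
```

Higher d′ in the congruent than in the incongruent condition, as reported in the abstract, indicates that a congruent auditory prime improved participants' ability to detect the masked picture.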

Summary

Participants

Forty volunteers (10 males; mean age 22.2 years) took part in this experiment in exchange for course credit or five pounds (UK sterling). The participants were native English speakers or bilinguals who had begun learning English by 5 years of age. The visual stimuli were presented on a 23-inch LED monitor controlled by a personal computer. The auditory stimuli (8-bit mono; 22,500-Hz digitization) were presented over closed-ear headphones and ranged in loudness from 31 to 51 dB sound pressure level (SPL). The spoken words consisted of the most commonly agreed-upon name used to refer to each picture (Bates et al., 2003; Snodgrass & Vanderwart, 1980) and were produced by a female native English speaker. The naturalistic sound and the spoken word associated with the same picture were edited to have the same duration. The root mean square values of all of the auditory stimuli were equalized.

Design
Procedure
Results
