Abstract

In contrast with, for example, audiovisual speech, the relation between the visual and auditory properties of letters and speech sounds is artificial and learned only by explicit instruction. The arbitrariness of this audiovisual link, together with the widespread use of letter–speech sound pairs in alphabetic languages, makes these audiovisual objects a unique subject for crossmodal research. Brain imaging evidence has indicated that heteromodal areas in the superior temporal cortex, as well as the modality-specific auditory cortex, are involved in letter–speech sound processing. The role of low-level visual areas, however, remains unclear. In this study, the visual counterpart of the auditory mismatch negativity (MMN) is used to investigate the influence of speech sounds on letter processing. Letter and non-letter deviants were infrequently presented in a train of standard letters, either in isolation or simultaneously with speech sounds. Although previous findings showed that letters systematically modulate speech sound processing (reflected by auditory MMN amplitude modulation), the reverse does not seem to hold: our results showed no evidence for an automatic influence of speech sounds on letter processing (no visual MMN amplitude modulation). This apparently asymmetric recruitment of low-level sensory cortices during letter–speech sound processing contrasts with the symmetric involvement of these cortices in audiovisual speech processing, and is possibly due to the arbitrary nature of the link between letters and speech sounds.
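As a rough illustration of the oddball protocol the abstract describes (rare deviants embedded in a train of standard letters), the following Python sketch generates such a trial sequence. The trial count, the deviant probability, the alternation of letter and non-letter deviant types, and the no-consecutive-deviants constraint are illustrative assumptions, not parameters taken from the experiment.

```python
# Illustrative sketch of an oddball stimulus sequence: frequent standard
# letters with rare, non-consecutive deviants. The trial count, deviant
# probability, and deviant labels are assumptions for illustration only,
# not the parameters of the original experiment.
import random

def make_oddball_sequence(n_trials=400, p_deviant=0.125, seed=0):
    """Build a trial list of frequent standards and rare deviants,
    alternating letter / non-letter deviant types and never presenting
    two deviants back to back."""
    rng = random.Random(seed)
    deviant_types = ["letter_deviant", "nonletter_deviant"]
    seq, n_dev = [], 0
    for _ in range(n_trials):
        if rng.random() < p_deviant and (not seq or seq[-1] == "standard"):
            seq.append(deviant_types[n_dev % 2])
            n_dev += 1
        else:
            seq.append("standard")
    return seq

sequence = make_oddball_sequence()
n_deviants = sum(trial != "standard" for trial in sequence)
print(sequence[:12], f"-> {n_deviants} deviants in {len(sequence)} trials")
```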

Highlights

  • The ability to rapidly integrate crossmodal sensations originating from a single object allows efficient and profound perception of our environment

  • Most pronounced in the difference waves at the occipital electrodes (Oz, O1 and O2), a deviant-related negativity (DRN) was observed between 150 and 400 ms after stimulus onset, in line with previously reported latencies (Pazo-Alvarez et al., 2003; Czigler, 2007)

  • In order to define the time window of interest objectively and to minimize the likelihood of a Type I error (Guthrie and Buchwald, 1991), a t-test was calculated in the visual experiment per condition and per time point at the middle occipital electrode (Oz), where the visual MMN (vMMN) was expected to be most prominent (Czigler et al., 2004); a minimal sketch of this procedure follows this list
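The two observations above can be made concrete with a small analysis sketch: per-subject deviant-minus-standard difference waves at Oz, a one-sample t-test at every time point, and a consecutive-significant-points criterion in the spirit of Guthrie and Buchwald (1991). This is a minimal sketch assuming NumPy arrays of shape (n_subjects, n_timepoints); the alpha level and run-length threshold are illustrative, not the values used in the study.

```python
# Minimal analysis sketch: per-subject deviant-minus-standard difference
# waves at Oz, a one-sample t-test at every time point, and a
# consecutive-significant-points criterion in the spirit of Guthrie and
# Buchwald (1991). Array shapes, alpha, and the run-length threshold are
# illustrative assumptions, not the values used in the study.
import numpy as np
from scipy import stats

def significant_window(standard, deviant, alpha=0.05, min_run=12):
    """standard, deviant: (n_subjects, n_timepoints) Oz ERPs.
    Returns a boolean mask over time points that are individually
    significant and belong to a run of >= min_run consecutive points."""
    diff = deviant - standard                    # difference waves per subject
    t, p = stats.ttest_1samp(diff, 0.0, axis=0)  # test against zero per time point
    sig = p < alpha
    mask = np.zeros_like(sig)
    run_start = None
    for i, s in enumerate(np.append(sig, False)):  # trailing False closes the last run
        if s and run_start is None:
            run_start = i
        elif not s and run_start is not None:
            if i - run_start >= min_run:
                mask[run_start:i] = True
            run_start = None
    return mask
```

At a 512 Hz sampling rate, for instance, min_run=12 spans roughly 23 ms; the appropriate run length depends on the autocorrelation of the EEG data, as discussed by Guthrie and Buchwald (1991).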



Introduction

The ability to rapidly integrate crossmodal sensations originating from a single object allows efficient and profound perception of our environment. It has been shown that audiovisual speech processing involves multisensory integration sites as well as low-level auditory and visual sensory systems, presumably via feedback projections (Calvert et al., 1999, 2000; Macaluso et al., 2004). Another example of audiovisual integration with which we are confronted daily is a basic literacy skill: letter–speech sound integration. While recent studies have revealed that multisensory as well as low-level auditory processing is involved during letter–speech sound integration (Hashimoto and Sakai, 2004; Van Atteveldt et al., 2004, 2007a; Blau et al., 2008), the role of low-level visual processing is less consistently reported and is the focus of the present study.

