Abstract

It is now commonly accepted that orthographic information influences spoken word recognition in a variety of laboratory tasks (lexical decision, semantic categorization, gender decision). However, it remains a hotly debated issue whether orthography influences normal word perception in passive listening. That is, the argument has been made that orthography might only be activated in laboratory tasks that require lexical or semantic access in some form or another. It is possible that these rather “unnatural” tasks invite participants to use orthographic information in a strategic way to improve task performance. To put the strategy account to rest, we conducted an event-related brain potential (ERP) study in which participants were asked to detect a 500-ms-long noise burst that appeared on 25% of the trials (Go trials). In the NoGo trials, we presented spoken words that were orthographically consistent or inconsistent. Thus, lexical and/or semantic processing was not required in this task, and there was no strategic benefit in computing orthography to perform it. Nevertheless, despite the non-linguistic nature of the task, we replicated the consistency effect previously reported in lexical decision and semantic tasks (i.e., inconsistent words produce more negative ERPs than consistent words as early as 300 ms after the onset of the spoken word). These results clearly suggest that orthography automatically influences word perception in normal listening even when there is no strategic benefit in computing it. The results are explained in terms of orthographic restructuring of phonological representations.

Highlights

  • There is an accumulating amount of evidence in favor of the idea that learning to read and write changes the way people process and/or represent spoken language

  • If the orthographic consistency effect resulted from strategic activation induced by the constraints of single-word lexical and semantic tasks, no such effect should have been found in a noise detection task, as neither lexical access nor semantic processing was required to perform the task

  • The fact that the orthographic consistency effect was replicated in an implicit passive listening condition supports the idea that the effects of orthography, at least the “early” ones associated with an enhanced event-related brain potential (ERP) amplitude for inconsistent words around 300 ms, are automatic and non-strategic in nature

Introduction

There is an accumulating amount of evidence in favor of the idea that learning to read and write changes the way people process and/or represent spoken language. The influence of orthography on spoken language has been found in both lexical and semantic tasks (Ziegler and Ferrand, 1998; Peereman et al., 2009). In these experiments, the consistency of the sound–spelling mapping was manipulated: inconsistent words, whose phonology can be spelled in multiple ways (e.g., /ait/ can be spelled ITE, IGHT, or YTE), were typically found to take longer to process than consistent words, whose phonology can be spelled only one way (/∧k/ is always spelled UCK). This consistency effect was taken as a marker for the activation of orthography in spoken language. In these paradigms, orthographic information is never presented explicitly; that is, participants hear spoken words and are totally unaware of any orthographic manipulation.

