Abstract

The experiments reported here used “Reversed-Interior” (RI) primes (e.g., cetupmor-COMPUTER) in three different masked priming paradigms to test between different models of orthographic coding and visual word recognition. Experiment 1, using a standard masked priming methodology, showed no evidence of priming from RI primes, contrary to the predictions of the Bayesian Reader and LTRS models. By contrast, Experiment 2, using a sandwich priming methodology, showed significant priming from RI primes, contrary to the predictions of open bigram models, according to which there should be no orthographic similarity between these primes and their targets. Similar results were obtained in Experiment 3, using a masked prime same-different task. The results of all three experiments are most consistent with the predictions derived from simulations of the Spatial-coding model.
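
To make the prime manipulation concrete, the short sketch below (my own illustration, not taken from the paper) shows one way to construct an RI prime from a target word: the first and last letters stay in place and the interior letters are reversed, as in cetupmor-COMPUTER. The function name is hypothetical.

    # Illustrative sketch (not the authors' stimulus-generation code): build a
    # Reversed-Interior (RI) prime by keeping the outer letters of the target
    # in place and reversing the letters in between.
    def reversed_interior_prime(target: str) -> str:
        word = target.lower()
        if len(word) <= 3:                 # nothing to reverse for very short words
            return word
        return word[0] + word[-2:0:-1] + word[-1]

    print(reversed_interior_prime("computer"))   # -> cetupmor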

Highlights

  • Identifying a printed word requires a reader to encode the input stimulus, match this input code against stored lexical representations, and select the best matching candidate from among the tens of thousands of words in the reader’s vocabulary.

  • Target words preceded by RI primes were responded to 4 ms faster than targets preceded by orthographic control primes, a difference that was not significant in either the subjects analysis (p = .492) or the items analysis (p = .769; both Fs < 1.0).

  • The results of Experiment 1, using a conventional masked priming methodology, showed a null effect of RI primes. This pattern of results is problematic for the Bayesian Reader model, which predicts strong priming for the RI primes used in this experiment.

Introduction

Identifying a printed word requires a reader to encode the input stimulus, match this input code against stored lexical representations, and select the best matching candidate from among the tens of thousands of words in the reader’s vocabulary. Contemporary research on reading and visual word recognition seeks to better understand these coding, matching and selection processes. A critical question regarding orthographic input coding concerns the nature of the fundamental units that allow access to lexical information. Do these units represent individual letters, or larger letter clusters, such as bigrams? The obvious follow-up question is how order information is represented across the encoded units. With respect to this second question, one might wonder whether accurately encoding order information really is critical for visual word identification, as the well-known demonstration in (1) suggests:

(1) It deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoatnt tihng is taht the frist and lsat ltteer be at the rghit pclae.
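
To make the contrast between coding schemes concrete, the toy sketch below counts shared “open bigrams” (ordered letter pairs separated by at most a small number of intervening letters) for the RI prime cetupmor and its target COMPUTER. This is my own illustration rather than the simulation code used here, and the two-letter gap limit is an assumption; open bigram models differ on this parameter. Under this scheme the prime and target share no open bigrams, which is the sense in which open bigram models predict no orthographic similarity between RI primes and their targets.

    # Toy open-bigram overlap count (illustrative only; not the reported simulations).
    from itertools import combinations

    def open_bigrams(word, max_gap=2):
        """Ordered letter pairs with at most max_gap intervening letters."""
        word = word.lower()
        return {word[i] + word[j]
                for i, j in combinations(range(len(word)), 2)
                if j - i <= max_gap + 1}

    prime, target = "cetupmor", "computer"
    print(sorted(open_bigrams(prime) & open_bigrams(target)))   # -> [] (no shared open bigrams)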
