Abstract

When reading, orthographic information is extracted not only from the word the reader is looking at, but also from adjacent words in the parafovea. Here we examined, using the recently introduced OB1‐reader computational model, how orthographic information can be processed in parallel across multiple words and how orthographic information can be integrated across time and space. Although OB1‐reader is a model of text reading, here we used it to simulate single‐word recognition experiments in which parallel processing has been shown to play a role by manipulating the surrounding context in flanker and priming paradigms. In flanker paradigms, observers recognize a central word flanked by other letter strings located left and right of the target and separated from the target by a space. The model successfully accounts for the finding that such flankers can aid word recognition when they contain bigrams of the target word, independent of where those flankers are in the visual field. In priming experiments, in which the target word is preceded by a masked prime, the model accounts for the finding that priming occurs independent of whether the prime and target word are in the same location or not. Crucial to these successes is the key role that spatial attention plays within OB1‐reader, as it allows the model to receive visual input from multiple locations in parallel, while limiting the kinds of errors that can potentially occur under such spatial pooling of orthographic information.

Highlights

  • When reading, orthographic information is extracted from the word the reader is looking at, and from adjacent words in the parafovea (e.g., Angele, Tran, & Rayner, 2013; Hohenstein, Laubrock, & Kliegl, 2010; Kennedy, 2000; Snell, Vitu, & Grainger, 2017; Vitu, Brysbaert, & Lancelin, 2004)

  • In the present work we examine, using a recently published computational model (Snell, van Leipsig, Grainger, & Meeter, 2018), how orthographic information can be processed in parallel across multiple words, and in particular the key role of spatial attention in such processing

  • In a recent computational model, OB1-reader (Snell, van Leipsig, et al., 2018), we proposed that spatial attention may prevent the excessive occurrence of illusory conjunctions

Introduction

Orthographic information is extracted not only from the word the reader is looking at, but also from adjacent words in the parafovea (e.g., Angele, Tran, & Rayner, 2013; Hohenstein, Laubrock, & Kliegl, 2010; Kennedy, 2000; Snell, Vitu, & Grainger, 2017; Vitu, Brysbaert, & Lancelin, 2004). Grainger, Mathôt, and Vitu (2014) proposed a "bag-of-bigrams" model to account for the results reported by Dare and Shillcock (2013). This involved a simple extension of the Grainger and van Heuven (2003) model of orthographic processing, such that processing of location-specific letter identities is performed in parallel across more than one word, and this information activates a set of location-invariant open bigrams. In a study that made use of the principle that the pupil responds to the brightness of covertly (i.e., without looking) attended locations, Snell, Mathôt, Mirault, and Grainger (2018) observed that pupil size was contingent on the brightness of the locations of flanking stimuli only in conditions where these impacted target processing (i.e., when they were presented left and right of the target, but not above and below it). This further strengthens the conception that orthographic integration effects are driven by parallel processing. We will discuss our results and the crucial role of attention in our simulations and in reading.
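The bag-of-bigrams idea above can be sketched in a few lines of Python. This is an illustrative toy, not OB1-reader's actual implementation: the gap limit of two intervening letters and the simple set union used for location-invariant pooling are assumptions made here for clarity, not the model's exact parameterization.

```python
def open_bigrams(word, max_gap=2):
    """Ordered letter pairs with up to max_gap intervening letters.

    The gap limit of 2 follows common open-bigram schemes, but is an
    illustrative assumption, not OB1-reader's exact parameter.
    """
    word = word.lower()
    return {
        (word[i], word[j])
        for i in range(len(word))
        for j in range(i + 1, min(i + 1 + max_gap + 1, len(word)))
    }

def pooled_bigrams(words):
    """Location-invariant pooling: the union of open bigrams across all
    letter strings in the visual field (target plus flankers)."""
    bag = set()
    for w in words:
        bag |= open_bigrams(w)
    return bag

# A flanker that shares bigrams with the target contributes evidence for
# the target word, regardless of where in the visual field it appears:
target = open_bigrams("rock")
overlap_related = target & pooled_bigrams(["ro", "ck"])    # shared bigrams
overlap_unrelated = target & pooled_bigrams(["pa", "vy"])  # no overlap
```

Under this sketch, related flankers such as "ro ck" share bigrams with the target "rock" while unrelated flankers share none, which is the flanker benefit the model is meant to capture.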

Structure of the model
New assumptions
Modeling single-word recognition paradigms
Flanking letters lexical decision
Priming as a function of prime and target location
Effects of attention and word length
Discussion
Input and letter/bigram activation
Word activation
Attention
