Abstract

Early in word recognition, letter positions are not accurately coded. Evidence for this comes from transposed-letter (TL) priming effects, in which letter strings created by transposing two adjacent letters (e.g., jugde) produce larger priming effects than primes in which those letters are replaced in the corresponding positions (e.g., junpe). Dominant accounts of TL priming effects, such as the Open Bigrams model (Grainger & van Heuven, 2003; Whitney & Cornelissen, 2008) and the SOLAR model (Davis & Bowers, 2006), explain the effect by proposing a level of representation above individual letter identities in which letter position is not coded accurately. An alternative is to assume that position coding itself is noisy (e.g., Gomez, Ratcliff, & Perea, 2008). We propose an extension to the Bayesian Reader (Norris, 2006) that incorporates letter position noise during sampling from the perceptual input. This model predicts “leakage” of letter identity to nearby positions, which is not expected under models that use alternative position coding schemes. We report three masked priming experiments testing predictions of this model.

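The abstract describes the noisy position-coding mechanism only informally. The minimal Python sketch below (not taken from the paper, and not the authors' Bayesian Reader implementation) illustrates the core idea: if the perceived position of each letter is perturbed by Gaussian noise, identity evidence "leaks" into adjacent position slots, so a transposed-letter prime such as jugde shares more positional evidence with the target judge than a substitution prime such as junpe. The function names, parameter values (sigma, n_samples), and the overlap score are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def position_evidence(prime: str, n_slots: int, sigma: float = 0.7, n_samples: int = 5000):
    """Accumulate letter-identity evidence per position slot when the
    perceived position of each letter is perturbed by Gaussian noise.

    Returns a list (one entry per slot) mapping letter -> proportion of samples.
    sigma and n_samples are illustrative values, not fitted parameters.
    """
    counts = [dict() for _ in range(n_slots)]
    for _ in range(n_samples):
        for true_pos, letter in enumerate(prime):
            # Perceived position = true position + Gaussian noise,
            # rounded to the nearest slot and clipped to the word frame.
            perceived = int(round(true_pos + rng.normal(0.0, sigma)))
            perceived = min(max(perceived, 0), n_slots - 1)
            slot = counts[perceived]
            slot[letter] = slot.get(letter, 0) + 1
    return [{k: v / n_samples for k, v in slot.items()} for slot in counts]

def overlap_with_target(prime: str, target: str) -> float:
    """Sum, over target positions, of the evidence the prime contributes
    to the target's letter at that position."""
    evidence = position_evidence(prime, len(target))
    return sum(evidence[i].get(letter, 0.0) for i, letter in enumerate(target))

# Transposed-letter prime vs. substitution prime for the target "judge":
print(overlap_with_target("jugde", "judge"))   # higher overlap: d and g leak into each other's slots
print(overlap_with_target("junpe", "judge"))   # lower overlap: n and p never match d or g
```

Under this toy scoring rule the transposed-letter prime yields a larger overlap with the target than the substitution prime, mirroring the direction of the masked TL priming advantage the abstract describes.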