Abstract

This paper compares two approaches to modelling orthographic processing: the Self-Organising Lexical Acquisition and Recognition (SOLAR; Davis, 1999, in press) and the Sequential Encoding Regulated by Inputs to Oscillating Letter units (SERIOL; Whitney, 2001, 2004) models, following up on a previous analysis by Whitney (2008). I provide a brief overview of the SOLAR model and its key similarities to and differences from the SERIOL model, focusing in particular on the different mechanisms underlying the formation of the positional gradient in the two models. I also discuss the neural implementation of the SOLAR model's lexical matching algorithm and its plausibility. In the final part of the paper I discuss empirical attempts to adjudicate between the two models, focusing on the masked form priming paradigm, as well as the use of theoretical match values to test model predictions. It is concluded that the SOLAR model provides an account of visual word identification that is neurally plausible and that succeeds in explaining critical orthographic similarity data, but that the SERIOL model does not satisfy these constraints.
