Abstract

Ambiguous words are processed more quickly than unambiguous words in a lexical decision task, even though each sense of an ambiguous word is less frequent than the single sense of an unambiguous word of equal frequency or familiarity. In this computer simulation study, we examined how different assumptions of a fully recurrent connectionist model affect its ability to account for this processing advantage for ambiguous words. We argue that the ambiguity advantage can be accounted for by distributed models if (a) the least mean square (LMS) error-correction algorithm, rather than the Hebbian algorithm, is used to train the network and (b) activation of the units representing the spelling, rather than the meaning, is used to index word recognition times. An important advantage of computational models is that their underlying assumptions must be explicitly formulated. This explicit formulation allows comparison of assumptions that are highly similar; in some cases, virtually identical assumptions give rise to qualitative rather than merely quantitative differences. In this article, we consider just such a situation. Two connectionist learning algorithms lead to opposite predictions about the time required to recognize ambiguous and unambiguous words when activation of the spelling units is used as the index of word recognition. In particular, ambiguous words are incorrectly predicted to have a processing disadvantage relative to unambiguous words when the Hebbian learning algorithm is used, but are correctly predicted to have a processing advantage when the LMS error-correction algorithm is used.
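To make the contrast between the two learning rules concrete, the sketch below (our illustration, not the authors' simulation) applies each rule to a toy single-layer auto-associator. The network size, learning rate, and random training patterns are assumed purely for demonstration. The key difference is that the Hebbian update depends only on the co-activation of the target pattern, whereas the LMS (delta) update is driven by the error between the target and the network's current output, so only the latter corrects its own mistakes.

```python
import numpy as np

# Illustrative sketch only: contrasts Hebbian vs. LMS (delta-rule) learning
# on a single-layer auto-associator. Network size, learning rate, and the
# training patterns are assumptions made for this demonstration.

rng = np.random.default_rng(0)
n_units = 16                                       # assumed number of units
patterns = rng.choice([-1.0, 1.0], (4, n_units))   # assumed +/-1 patterns
lr = 0.01                                          # assumed learning rate

def hebbian_update(W, p, lr):
    # Hebbian rule: weight change is proportional to co-activation of the
    # target pattern with itself; the network's own output plays no role.
    return W + lr * np.outer(p, p)

def lms_update(W, p, lr):
    # LMS (delta) rule: weight change is proportional to the error between
    # the target pattern and the network's current output for that pattern.
    error = p - W @ p
    return W + lr * np.outer(error, p)

W_hebb = np.zeros((n_units, n_units))
W_lms = np.zeros((n_units, n_units))

for epoch in range(200):
    for p in patterns:
        W_hebb = hebbian_update(W_hebb, p, lr)
        W_lms = lms_update(W_lms, p, lr)

# Mean one-step recall error on the training patterns: the LMS network drives
# this error toward zero, whereas plain Hebbian learning does not.
for name, W in (("Hebbian", W_hebb), ("LMS", W_lms)):
    err = np.mean([np.linalg.norm(W @ p - p) for p in patterns])
    print(f"{name:7s} recall error: {err:.3f}")
```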
