Abstract

Pavlenko argues that contemporary models of the bilingual lexicon (e.g., Kroll & De Groot, 1997) confuse word meanings and concepts. She advocates a new approach to concepts in bilingual memory in which meanings and concepts have separate representations. "The evidence for a distinction between word meanings and concepts comes from the study of aphasia: it has been demonstrated that global and paroxysmal aphasics exhibit a complete loss of language (lack of production and comprehension) in the presence of self-regulated and communicative behavior, based on well-controlled non-linguistic conceptual representations". For example, such patients may be able to tell the difference between a cat and a dog, but they can neither produce nor understand the words "cat" and "dog". According to Pavlenko, such findings suggest that word meanings and concepts have separate representations in the brain (cf. Paradis, 1997).

In this commentary, I argue that the assumption of separate representations for meanings and concepts is not required by the aphasia data; in fact, the standard account of global aphasia and anomia does not make this distinction (e.g., Caplan, 1992). Furthermore, the findings on bilingual performance do not require the separation either. Instead, a single conceptual level suffices and provides an even better account of the available evidence. I lay out my arguments using the WEAVER++ model of word production (Roelofs, 1992, 1993; Levelt, Roelofs, & Meyer, 1999a), but they hold for most "one-level" models in the literature. WEAVER++ is a model of monolingual word production in which conceptual representations also code word meanings. So, if Pavlenko is right, the model should have great difficulty accounting for the patient data, and it should be hard to extend the model to bilingual production.

In the model, a distinction is made between conceptual preparation, lemma retrieval, and word-form encoding. During conceptual preparation, a speaker decides on the conceptual information to be verbally expressed, the "message" concepts. In lemma retrieval, a message concept is used to retrieve a lemma from memory, which is a representation of the syntactic properties of a word, crucial for its use in sentences. For example, an English verb lemma specifies the word's syntactic class and what kind of complements the word takes. A verb lemma also contains morphosyntactic slots for the specification of tense, aspect, mood, person, and number. The slots are given values using information from the message or are set by agreement. So, it is certainly not the case that a one-level model "narrows the scope of investigation to lexicalized concepts only, making it impossible to entertain any other kind, such as grammaticized concepts (encoded morphosyntactically)", as claimed by Pavlenko. A noun lemma specifies the syntactic class, has a number slot (for count nouns), and, for languages like Dutch, French, and German, specifies the grammatical gender. Lemma retrieval makes these properties available for syntactic encoding processes. In word-form encoding, the lemma information and the morphosyntactic slot values are used to retrieve the appropriate form properties from memory in order to construct an articulatory program. Information about words is represented in a network.
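
To make the three stages concrete, the following is a minimal, purely illustrative sketch in Python; it is not part of the original commentary and not the actual WEAVER++ implementation. The lexicon entries, slot names, and the crude pluralization rule are assumptions introduced only to show how a lemma's syntactic properties and morphosyntactic slots mediate between a message concept and a word form.

```python
# Hypothetical sketch of conceptual preparation -> lemma retrieval -> word-form
# encoding. All structures here are illustrative assumptions, not WEAVER++ itself.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Lemma:
    """Syntactic properties of a word: class, grammatical gender, and slots."""
    syntactic_class: str                       # e.g. "noun" or "verb"
    gender: Optional[str] = None               # relevant for Dutch, French, German nouns
    slots: dict = field(default_factory=dict)  # morphosyntactic slots (number, tense, ...)


# Toy lexicon: in a one-level account, the concept node itself also codes the meaning.
LEXICON = {
    "CAT": {"lemma": Lemma("noun", gender="common", slots={"number": None}), "form": "kat"},
    "DOG": {"lemma": Lemma("noun", gender="common", slots={"number": None}), "form": "hond"},
}


def conceptual_preparation(intended: str) -> str:
    """Decide which concept enters the message (trivially passed through here)."""
    return intended


def lemma_retrieval(concept: str, number: str = "singular") -> Lemma:
    """Retrieve the lemma for a message concept and set its morphosyntactic slots."""
    lemma = LEXICON[concept]["lemma"]
    lemma.slots["number"] = number  # slot value taken from the message or set by agreement
    return lemma


def word_form_encoding(concept: str, lemma: Lemma) -> str:
    """Use lemma information and slot values to assemble a (toy) articulatory program."""
    form = LEXICON[concept]["form"]
    # Crude, assumption-only pluralization for the sake of the example.
    return form + "ten" if lemma.slots.get("number") == "plural" else form


concept = conceptual_preparation("CAT")
lemma = lemma_retrieval(concept, number="plural")
print(word_form_encoding(concept, lemma))  # -> "katten"
```

The point of the sketch is only that a single conceptual level can drive both lemma selection and form encoding; nothing in it requires a separate layer of word meanings between concepts and lemmas.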
