Abstract

In much of neuroimaging and neuropsychology, regions of the brain have been associated with ‘lexical representation’, with little consideration as to what this cognitive construct actually denotes. Within current computational models of word recognition, there are a number of different approaches to the representation of lexical knowledge. Structural lexical representations, found in original theories of word recognition, have been instantiated in modern localist models. However, such a representational scheme lacks neural plausibility in terms of economy and flexibility. Connectionist models have therefore adopted distributed representations of form and meaning. Semantic representations in connectionist models necessarily encode lexical knowledge. Yet when equipped with recurrent connections, connectionist models can also develop attractors for familiar forms that function as lexical representations. Current behavioural, neuropsychological and neuroimaging evidence shows a clear role for semantic information, but also suggests some modality- and task-specific lexical representations. A variety of connectionist architectures could implement these distributed functional representations, and further experimental and simulation work is required to discriminate between these alternatives. Future conceptualisations of lexical representations will therefore emerge from a synergy between modelling and neuroscience.

Highlights

  • In much of neuroimaging and neuropsychology, regions of the brain have been associated with ‘lexical representation’, with little consideration as to what this cognitive construct denotes

  • The lexicality effect is pervasive across a variety of psycholinguistic tasks; those that have most informed the development of models of written and spoken word recognition include letter/phoneme identification, visual/auditory lexical decision, and reading aloud/repetition

  • If lexicality effects obtained with closely matched nonwords are always accompanied by significant effects of semantic variables such as imageability, this suggests that lexical representations could potentially be reduced to semantic information


Summary

Why do we need lexical representations?

The existence of some form of lexical representation is inferred when a behavioural processing advantage emerges for a familiar string of letters or phonemes (e.g., DOG) over a novel string (e.g., POG); this is termed the lexicality effect. Harm and Seidenberg’s (2004) model included a transparent set of semantic representations (in which each unit corresponded to an underlying semantic feature), as did Chang, Lambon Ralph, Furber, and Welbourne’s (2013) model. All of these models were able to discriminate effectively between words (e.g., BRAIN) and closely matched nonwords that are homophonic with real words (i.e., pseudohomophones such as BRANE), albeit via different metrics. In Chang et al.’s (2013) model, the decision was made on the basis of pooled activation over orthographic, phonological and semantic units (see the sketch below). This latter model has also simulated the larger semantic effects that are seen in lexical decision performance for words presented in a difficult relative to an easy foil context. Semantic information therefore clearly contributes to the representation of lexical items.
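The pooled-activation decision can be conveyed with a short sketch. The Python snippet below is a minimal illustration under stated assumptions: the toy activation vectors, the mean-pooling function and the 0.55 threshold are all hypothetical and are not taken from Chang et al.’s (2013) implementation; it only shows the general idea of basing a word/nonword decision on activation summed across orthographic, phonological and semantic layers.

```python
import numpy as np

# Sketch of a pooled-activation lexical decision metric.
# NOTE: the activation vectors, pooling function and threshold are
# illustrative assumptions, not the metric used by Chang et al. (2013).

def pooled_activation(orth, phon, sem):
    """Pool unit activations across orthographic, phonological and
    semantic layers into a single lexicality score."""
    return np.concatenate([orth, phon, sem]).mean()

def lexical_decision(orth, phon, sem, threshold=0.55):
    """Respond 'word' if pooled activation exceeds a (hypothetical) threshold."""
    return "word" if pooled_activation(orth, phon, sem) > threshold else "nonword"

# Toy activations: a familiar word (BRAIN) settles into a strong, consistent
# pattern; a pseudohomophone (BRANE) yields phonological support but little
# orthographic or semantic support.
brain = dict(orth=np.array([0.9, 0.8, 0.9]),
             phon=np.array([0.9, 0.9, 0.8]),
             sem=np.array([0.8, 0.9, 0.7]))
brane = dict(orth=np.array([0.6, 0.5, 0.4]),
             phon=np.array([0.9, 0.8, 0.9]),   # sounds like a real word
             sem=np.array([0.2, 0.3, 0.1]))    # weak semantic activation

print(lexical_decision(**brain))  # -> word
print(lexical_decision(**brane))  # -> nonword
```

On this toy scheme the pseudohomophone BRANE receives strong phonological activation but weak semantic activation, so its pooled score falls below threshold, which is one way a model without explicit lexical entries can nevertheless show a lexicality effect.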

Lexical versus semantic representations
Lexical representations as attractors
Specificity of lexical representations
Localisation of lexical representations
Activation of lexical representations
Development of lexical representations
Conclusions and future directions