Abstract
Embodied theories of language postulate that language meaning is stored in modality-specific brain areas generally involved in perception and action in the real world. However, the temporal dynamics of the interaction between modality-specific information and lexical-semantic processing remain unclear. We investigated the relative timing at which two types of modality-specific information (action-based and visual-form information) contribute to lexical-semantic comprehension. To this end, we applied a behavioral priming paradigm in which prime and target words were related with respect to (1) action features, (2) visual features, or (3) semantically associative information. Using a Go/No-Go lexical decision task, priming effects were measured across four inter-stimulus intervals (ISI = 100, 250, 400, and 1000 ms) to determine the relative time course of the different features. Notably, action priming effects were found at ISIs of 100, 250, and 1000 ms, whereas a visual priming effect was seen only at the 1000 ms ISI. Importantly, our data suggest that features follow different time courses of activation during word recognition: feature activation is dynamic, measurable in specific time windows but not in others. Thus, the current study (1) demonstrates how multiple ISIs can be used within an experiment to help chart the time course of feature activation and (2) provides new evidence for embodied theories of language.
Highlights
- One of the oldest issues in cognitive psychology concerns the mental representation of meaning.
- Complementary F1 and F2 analyses, using only the Action and Visual conditions for the Condition factor, verify that the two main effects of interest differ in time course.
- Action precedes visual feature activation: words referring to manipulable objects can elicit action priming effects, as reported in the object representation literature (e.g., Ellis and Tucker, 2000; for a review, see Martin, 2007).
Summary
One of the oldest issues in cognitive psychology concerns the mental representation of meaning. In the past decade, embodied theories of language, postulating that language meaning is stored in modality-specific brain areas, have gained in popularity and empirical support. For example, the meaning of the word "grasp" activates some of the neural areas involved in planning and performing everyday grasping actions (e.g., Hauk et al., 2004; Rueschemeyer et al., 2007), while comprehension of the word "red" entails activation of parts of the neural visual pathway (e.g., Simmons et al., 2007; van Dam et al., 2012). Despite much research, important questions remain unanswered. One of these is when, and to what end, modality-specific information becomes activated during language comprehension.
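To make the paradigm concrete, the trial structure described above (prime, then a variable ISI, then a target requiring a Go/No-Go lexical decision) can be sketched as a simple crossing of relatedness condition and ISI. This is a minimal illustration only: the condition labels, the number of pairs per cell, and the builder function are hypothetical placeholders, not the authors' actual stimuli or design.

```python
import itertools
import random

# Hypothetical sketch of the priming trial structure: each trial presents a
# prime word, waits for one of four ISIs (100/250/400/1000 ms), then shows a
# target word for a Go/No-Go lexical decision. Condition names are illustrative.
ISIS_MS = [100, 250, 400, 1000]
CONDITIONS = ["action", "visual", "associative", "unrelated"]

def build_trials(pairs_per_condition=2, seed=0):
    """Cross condition x ISI x pair index and shuffle into a trial list."""
    rng = random.Random(seed)
    trials = [
        {"condition": cond, "isi_ms": isi, "pair_index": i}
        for cond, isi, i in itertools.product(
            CONDITIONS, ISIS_MS, range(pairs_per_condition)
        )
    ]
    rng.shuffle(trials)
    return trials

trials = build_trials()
print(len(trials))  # 4 conditions x 4 ISIs x 2 pairs = 32 trials
```

Crossing ISI with condition within one experiment is what lets priming effects be compared across time windows, which is how the abstract's action-before-visual time-course claim is assessed.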