Abstract
Embodiment of conceptual knowledge is one of the most influential ideas to sweep the landscape of cognitive psychology in recent decades [1,2], and as the review by van Elk, van Schie, and Bekkering [6] makes clear, it has deeply affected related subfields as well. We have substantial evidence to support the proposal that sensory and motor representations can affect perceptual and cognitive experiences, and that these representations are likely to be integrated in some way with conceptual knowledge. The hard work of distinguishing the functional and epiphenomenal contributions that sensorimotor representations make to processes such as object identification and language comprehension must now begin in earnest [3]. Toward that end, van Elk et al. have provided a framework for understanding some of the contextual influences that govern the involvement of action representations in conceptual processing. They suggested that this framework generates testable predictions regarding the distinction between multimodal and modality-specific representations and the hierarchical control of action. Genuine predictions are very hard to come by when working with frameworks rather than computational models, however, and what van Elk et al. actually deliver turns out to be suggestions for research questions that might be addressed rather than clear predictions about the results that experiments should produce. Such caution is wise, because it is becoming apparent that seemingly straightforward predictions about the nature and function of action representations often fail when put to empirical test (see [4] for examples involving mental simulation and language comprehension). A striking case of failed expectations regarding action representations comes from a single-cell recording study of hand movements in monkeys [5]. Animals were trained to move their hand forward and to the right or left to contact an illuminated target. They did so while viewing a video image of their hand and the indicated target, rather than viewing the hand and target directly. Distinct populations of neurons in premotor cortex fired when the hand moved to the left versus the right. Rather than coding hand movements directly, however, these populations were actually coding a higher-level action goal. This fact was cleverly revealed when the experimenters displayed on the monkey’s monitor not a literal image of its moving hand, but rather a mirror image. Thus, a rightward movement of the hand effected a leftward movement of the image of the hand on the monitor. To capture a target appearing on the left side of the monitor, therefore, the monkey was required to make a physical movement of the hand to the right. Under these conditions, the neurons fired according to the direction of the hand’s movement on the monitor rather than the direction of the physical movement, indicating that they coded the action goal rather than the hand movement itself.