Abstract
Increasing evidence shows that vision, action and language should not be regarded as a set of disembodied processes. Instead, they form a closely integrated and highly dynamic system that is attuned to the constraints of its bodily implementation as well as to the constraints coming from the world with which this body interacts. One consequence of such embodiment of cognition is that seeing an object, even when there is no intention to handle it, activates plans for actions directed toward it (e.g., Tucker & Ellis, 1998, 2001; Fischer & Dahl, 2007). Using object names induces action planning effects similar to those evoked by seeing the objects themselves (Tucker & Ellis, 2004; Borghi, Glenberg & Kaschak, 2004). Depending on linguistic context, different object features can be activated for action planning, as indicated by facilitated manual responses or "affordance effects" (e.g., Borghi, 2004; Glenberg & Robertson, 2000; Zwaan, 2004). Similarly, different action intentions direct attention differently to object features for processing (e.g., Bekkering & Neggers, 2002; Fischer & Hoellen, 2004; Symes, Tucker, Ellis, Vainio, & Ottoboni, 2008).

Eye movements during visually guided actions shed further light on the close relationship between vision, action and language (Land & Furneaux, 1997; Johansson, Westling, Backstrom, & Flanagan, 2001). For example, when humans interact with objects, their eyes move ahead of their hands to support the on-line control of grasping (e.g., Bekkering & Neggers, 2002). These behavioral results are supported by brain imaging studies of object affordances in humans (e.g., Grezes, Tucker, Armony, Ellis, & Passingham, 2003) and by single cell recordings in monkeys (e.g., Sakata, Taira, Mine, & Murata, 1992; Fadiga, Fogassi, Gallese, & Rizzolatti, 2000). Together, these behavioral and neuroscientific studies have recently begun to inform computational models of embodied cognition. For example, Tsiotas, Borghi and Parisi (2005) devised an artificial life simulation to give an evolutionary account of some affordance effects, and Caligiore, Borghi, Parisi, and Baldassarre (2010) proposed a computational model that accounts for several affordance-related effects in grasping, reaching, and language. The neuroscientific constraints implemented in the design of that model allow its authors to investigate the neural mechanisms underlying affordance selection and control. The present special issue brings together recent developments at the intersection of behavioral, neuroscientific, and computational approaches to embodied cognition.

Strong support for the close link between vision, action and language comes from studies showing that language processing and comprehension make use of neural systems ordinarily used for perception and action (Lakoff, 1987; Zwaan, 2004; Barsalou, 1999; Glenberg & Robertson, 1999; Gallese, 2008; Glenberg, 2010). For example, when humans process the word "cup", they seem to reenact (and therefore internally simulate) many of the perceptual, motor and affective representations related to a cup (Barsalou, 1999). In a similar way, sentences and abstract words are understood by creating a simulation of the actions underlying them (Glenberg & Kaschak, 2002).