Abstract

Large language models can handle sophisticated natural language processing tasks. This raises the question of how their grasp of semantic meaning compares to that of human beings. Supporters of embodied cognition often point out that because these models are trained solely on text, their representations of semantic content are not grounded in sensorimotor experience. This paper contends that human cognition exhibits capabilities consistent with both the embodied and the artificial intelligence approaches. Evidence suggests that semantic memory is partially grounded in sensorimotor systems and partially dependent on language-specific learning. From this perspective, large language models demonstrate the richness of language as a source of semantic information. They show how our experience with language might scaffold and extend our capacity to make sense of the world. In the context of an embodied mind, language provides access to a valuable form of ungrounded cognition. This article is part of the theme issue 'Minds in movement: embodied cognition in the age of artificial intelligence'.