Abstract

Theories of embodied cognition agree that the body plays some role in human cognition, but disagree on the precise nature of this role. While the body (together with the environment) is fundamentally ingrained in the so-called 4E (or multi-E) cognition stance, there also exist interpretations wherein the body is merely an input/output interface for cognitive processes that are entirely computational. In the present paper, we show that even if one takes such a strong computationalist position, the role of the body must be more than an interface to the world. To achieve human cognition, the computational mechanisms of a cognitive agent must be capable not only of appropriate reasoning over a given set of symbolic representations; they must in addition be capable of updating the representational framework itself (leading to the titular representational fluidity). We demonstrate this by considering the necessary properties that an artificial agent with these abilities would need to possess. The core of the argument is that these updates must be falsifiable in the Popperian sense while simultaneously directing representational shifts in a direction that benefits the agent. We show that this is achieved by the progressive, bottom-up symbolic abstraction of low-level sensorimotor connections, followed by top-down instantiation of testable perception-action hypotheses. We then discuss the fundamental limits of this representational updating capacity, concluding that only fully embodied learners exhibiting such a priori perception-action linkages are able to sufficiently ground spontaneously generated symbolic representations and exhibit the full range of human cognitive capabilities. The present paper therefore has consequences both for the theoretical understanding of human cognition and for the design of autonomous artificial agents.

Highlights

  • In cognitive science, theories that cognition is, in some sense, embodied can be traced back to two distinct origins (Chemero, 2009): a reaction to the perceived inadequacies of purely computationalist accounts, and a continuation of eliminativist/anti-representationalist theories of mind

  • The main motivation often stated in robotics research is that such a minimal embodiment is necessary to ground symbols so that they acquire a meaning intrinsic to a cognitive agent as opposed to one that is given by an external observer

  • We argue that reducing the role of the body to such


Introduction

Theories that cognition is, in some sense, embodied can be traced back to two distinct origins (Chemero, 2009): a reaction to the perceived inadequacies of purely computationalist accounts, and a continuation of eliminativist/anti-representationalist theories of mind. As already stated, we demonstrate that the role of the body – even given this weak interpretation – must necessarily go beyond that of a sensorimotor interface. It must go beyond what is required by symbol grounding considerations because it provides and shapes necessary computational mechanisms that cannot be disembodied. We note that this is not an anti-functionalist argument; we merely reject the claim that models that are implemented in a physical agent, but merely use the available sensors and actuators to collect and deliver information for otherwise computational approaches, are in any sense embodied (therein following Ziemke and Thill, 2014, albeit for different reasons). We first present insights from the literature on biological agents (including humans), and then discuss how one might approach this in an artificial agent.

In biological agents
In artificial agents
Requirements for representational updating in artificial agents
Noumenal continuity across representational changes
P-A learning
Hierarchical P-A learning
Discussion and conclusions