Abstract

Despite lacking a generally accepted definition, Artificial General Intelligence (AGI) is commonly understood to refer to artificial agents possessing the capacity to build up a context-independent understanding of themselves and the world and to generalize this knowledge across a multitude of contexts. In human agents, this capacity is, to a large degree, facilitated by processes of self-directed learning, during which agents voluntarily control the conditions under which episodes of learning and problem solving occur. Since self-directed learning depends on the degree of knowledge agents have about various aspects of themselves (their bodily skills, their learning goals, etc.), an AGI implementation of this type of learning must build on a theory of how this self-knowledge is actualized and modified during the learning process. In this paper, we employ the pattern theory of self to characterize different aspects of an agent’s self that are relevant for self-directed learning. Such aspects include agent-internal cognitive states such as thoughts, emotions, and intentions, but also relational states such as action possibilities in the environment. Combinations of these aspects form a characteristic pattern, which is unique to each individual agent, with no one aspect being necessary or sufficient for the individuation of that agent’s self. Here, we focus on the interdependence of narrative and embodied aspects of the self-pattern, since they pose particularly salient challenges for conceptualizing the interaction between propositional and motor representations. We model the reciprocal interaction of these aspects of the self-pattern within an individual cognitive agent. We do so by extending an approach by Ryan, Agrawal, & Franklin (2020), who laid the groundwork for the implementation of the pattern theory of self in the LIDA (Learning Intelligent Decision Agent) model. We describe how embodied and narrative aspects of an agent’s self-pattern are realized by patterns of interaction between different LIDA modules over time, and how interactions at multiple temporal scales allow the agent’s self-pattern to be both dynamically variable and relatively stable. Finally, we investigate the implications of this view for the creation of artificial agents that can benefit from self-directed learning, both in the context of deliberate planning and adaptive motor execution.
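As a minimal sketch of the idea that a self-pattern can be dynamically variable yet relatively stable, the toy Python code below updates an embodied aspect on a fast time scale and consolidates a narrative aspect on a much slower one. The class, field, and method names here are illustrative assumptions for this sketch; they are not the LIDA modules or any implementation described in the paper.

```python
# Illustrative sketch only: a toy self-pattern whose embodied aspect changes
# every perception-action cycle, while its narrative aspect is consolidated
# at a coarser temporal scale. Names are assumptions, not LIDA APIs.
from dataclasses import dataclass, field


@dataclass
class SelfPattern:
    # Fast-changing, sensorimotor-level aspect (e.g., current posture, affordances).
    embodied: dict = field(default_factory=dict)
    # Slow-changing, propositional aspect (e.g., autobiographical episode summaries).
    narrative: list = field(default_factory=list)

    def fast_update(self, sensorimotor_state: dict) -> None:
        """Update the embodied aspect on every perception-action cycle."""
        self.embodied.update(sensorimotor_state)

    def slow_update(self, episode_summary: str) -> None:
        """Consolidate a summary of recent activity into the narrative aspect."""
        self.narrative.append(episode_summary)


# Usage: the embodied aspect varies cycle by cycle, while the narrative aspect
# accumulates slowly, keeping the overall pattern comparatively stable.
agent_self = SelfPattern()
for step in range(5):
    agent_self.fast_update({"reach_distance": 0.4 + 0.01 * step})
agent_self.slow_update("practiced reaching; reach improved slightly")
```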
