This paper explores the crucial role of embodiment in learning representations of spatial topology in robotics. Embodiment, the ability of an agent to interact with its environment and receive sensory feedback, is fundamental to developing accurate and efficient representations. We investigate this by applying an action-conditional prediction algorithm to data collected from a simulated environment, aiming to learn the topology of the environment through sequences of random interactions. In a simple mobile-robot-like scenario, we demonstrate how an agent can discover the topology of its environment by leveraging sensory-motor interactions. Our results underscore the importance of embodiment in the development of representations, indicate its potential applicability to robotic tasks, and provide a simple but effective method for integrating actions into a learning loop. We suggest that building abstract representations through action-conditional prediction is a step towards unifying the representations used in robotics.
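The abstract does not specify implementation details, so the following is only a rough sketch of the general idea of action-conditional prediction, not the paper's method. All names and the toy grid-world setup are hypothetical: an agent takes random actions, a simple tabular model learns to predict the next observation given the current observation and action, and the environment's topology can then be read off from the learned transitions.

```python
# Hypothetical minimal sketch: learn the topology of a small grid world from
# random action-conditioned interactions. Not the paper's implementation.
import random
from collections import defaultdict

# A 4x4 grid world; states are (row, col), actions move the agent.
SIZE = 4
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(state, action):
    """Apply an action; walls clamp the agent to the grid."""
    dr, dc = ACTIONS[action]
    r = min(max(state[0] + dr, 0), SIZE - 1)
    c = min(max(state[1] + dc, 0), SIZE - 1)
    return (r, c)

# Action-conditional predictor: counts of (state, action) -> next state.
counts = defaultdict(lambda: defaultdict(int))

state = (0, 0)
for _ in range(20000):                       # random exploration
    action = random.choice(list(ACTIONS))
    nxt = step(state, action)
    counts[(state, action)][nxt] += 1        # update the predictive model
    state = nxt

def predict(state, action):
    """Most likely next state under the learned model."""
    outcomes = counts[(state, action)]
    return max(outcomes, key=outcomes.get) if outcomes else None

# The topology emerges from the learned predictions: two states are adjacent
# if some action is predicted to move the agent between them.
adjacency = defaultdict(set)
for (s, a), outcomes in counts.items():
    nxt = predict(s, a)
    if nxt is not None and nxt != s:
        adjacency[s].add(nxt)

print(sorted(adjacency[(1, 1)]))   # neighbours of an interior cell
```

In this toy version the "representation" is just a transition table; the point it illustrates is that adjacency (topology) is never given to the agent directly but is recovered from how actions transform observations.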