Abstract

For naive robots to become truly autonomous, they need a means of developing their perceptive capabilities instead of relying on hand-crafted models. The sensorimotor contingency theory asserts that such a means resides in learning invariants of the sensorimotor flow. We propose a formal framework inspired by this theory for describing the sensorimotor experiences of a naive agent, extending previous related works. We then use this formalism to conduct a theoretical study in which we isolate sufficient conditions for the determination of a sensory prediction function. We further show that algebraic structure found in this prediction can be taken as a proxy for structure on the motor displacements, allowing the combinatorial structure of those displacements to be discovered. Both claims are illustrated in simulations where a toy naive agent determines the sensory predictions of its spatial displacements from its uninterpreted sensory flow, and then uses them to infer the combinatorics of those displacements.
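
To make the claim about algebraic structure concrete, the following is a minimal sketch in notation of our own choosing (the paper's actual formalism may differ): each displacement d is assumed to induce a sensory prediction function σ_d, and consistency under composition is what lets prediction structure stand in for displacement structure.

% Hypothetical notation: S is the sensory space, D the set of motor displacements.
% A sensory prediction function for a displacement d in D is a map
\[
  \sigma_d : S \to S, \qquad s_{t+1} = \sigma_d(s_t)
  \quad \text{whenever } d \text{ is executed at time } t.
\]
% If predictions compose consistently with displacements,
\[
  \sigma_{d_2 d_1} \;=\; \sigma_{d_2} \circ \sigma_{d_1},
\]
% then d \mapsto \sigma_d is a homomorphism: relations among the prediction
% functions, e.g. \sigma_{d'} \circ \sigma_d = \mathrm{id}_S, can be read back
% as relations among the displacements themselves (here, d' undoes d).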

Highlights

  • Autonomous robots need to possess the cognitive capabilities to face realistic and uncertain environments

  • This article makes two contributions. The formalism is used to describe formally why and how spatial coherence leads to the discovery of sensory predictions, much as alluded to in Laflaquière (2017), and how these sensory predictions encode spatial structure akin to that of Terekhov and O’Regan (2016) and Laflaquière and Ortiz (2019); it does so with a greater emphasis on the precise relations between the algebraic structures at play and with much weaker assumptions about the a priori capabilities of the agent, closer to those put forward in O’Regan and Noë (2001). We argue that this formalism unifies and extends those found in previous works; that the formal structures its expressive power makes explicit give a conceptual explanation of results previously achieved by more complex means in experimental contexts (Ortiz and Laflaquière, 2018; Laflaquière and Ortiz, 2019); and, on a somewhat “philosophical” level, that it allows for a clearer picture of the applicability and function of SMCT in the process of bootstrapping perception via its systematic distinction of points of view

  • The proposed formalism has been assessed in simulation as a proof of concept, with a naive agent able to (i) build, for each of its actions, a permutation matrix associated with its own sensory array, and (ii) exploit these matrices to structure its own set of actions (a toy sketch of this pipeline is given below)
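
As an illustration of such a proof of concept, here is a minimal Python sketch under toy assumptions of our own (the ring world, sensor layout, and action set below are not the paper's actual simulation): the agent records its sensory array before and after each action, estimates one permutation matrix per action by matching sensor values across transitions, and then reads relations between these matrices as relations between its actions.

import numpy as np

rng = np.random.default_rng(0)

# Toy setup (our assumptions): the agent lives on a ring of N cells carrying
# arbitrary values; its sensory array is the ring read from its current
# position, and each action shifts that position by an amount unknown to it.
N = 8
ACTIONS = {"left": -1, "right": +1, "jump": 3}  # hidden displacements

def sense(world, pos):
    # Uninterpreted sensory array as seen from position `pos`.
    return np.roll(world, -pos)

def estimate_permutation(action, trials=10):
    # (i) Build a permutation matrix P such that s_after = P @ s_before,
    # by matching post-action sensor values to pre-action sensor values
    # consistently across several recorded transitions. The environment
    # applies the hidden displacement; the agent only sees the two arrays.
    before, after = [], []
    for _ in range(trials):
        world, pos = rng.random(N), rng.integers(N)
        before.append(sense(world, pos))
        after.append(sense(world, (pos + ACTIONS[action]) % N))
    before, after = np.array(before), np.array(after)
    P = np.zeros((N, N), dtype=int)
    for i in range(N):
        j = next(k for k in range(N) if np.allclose(after[:, i], before[:, k]))
        P[i, j] = 1
    return P

perms = {a: estimate_permutation(a) for a in ACTIONS}

# (ii) Exploit the matrices to structure the set of actions: relations among
# the permutations are read back as relations among the actions themselves.
I = np.eye(N, dtype=int)
for a in ACTIONS:
    for b in ACTIONS:
        if np.array_equal(perms[a] @ perms[b], I):
            print(f"{a} undoes {b}")               # e.g. left undoes right
if np.array_equal(perms["jump"], np.linalg.matrix_power(perms["right"], 3)):
    print("jump acts like three rights")           # composite action discovered

Running this sketch reports, for instance, that "left" undoes "right" and that "jump" behaves like three applications of "right", i.e., some combinatorial structure of the hidden displacements is recovered purely from the uninterpreted sensory flow.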


Introduction

Autonomous robots need to possess the cognitive capabilities to face realistic and uncertain environments. These capabilities traditionally rely on hand-crafted models, which are notoriously difficult to obtain (Lee et al., 2017), by definition incomplete (Nguyen et al., 2017), and often fail to generalize to interactions varying over unknown spatial and temporal scales. As has been previously studied, models of an agent's sensorimotor apparatus (Censi and Murray, 2012) or of a mobile robot's interaction with its environment (Jonschkowski and Brock, 2015) can alternatively be learned. These capabilities crucially depend on the robot correctly learning its perception, since perception represents the interface layer between the raw readings of its sensors and its higher-level cognitive capabilities, e.g., decision-making or task-solving layers.

