Abstract

Robots can learn how to interact with objects by developing computational models of affordance. This paper presents an approach in which learning and operation occur concurrently, toward achieving lifelong affordance learning. In such a regime a robot must be able to learn about new objects, but without a general rule for what an “object” is, it must learn about everything in the environment to determine its affordances. In this paper, sensorimotor coordination is modeled using a distributed semi-Markov decision process that is created online during robot operation and performs continual action selection to reach a goal state. An initial experiment shows that this model captures an object’s affordances, which are exploited to perform several different tasks using a mobile robot equipped with a gripper and an infrared “tactile” sensor. A second experiment shows that the robot can learn that the marker is the only visual feature that can be gripped, and that walls and floors do not have the affordance of being “grip-able.” The distributed mechanism is necessary for modeling multiple sensory stimuli simultaneously; selection of the object with the affordances required for the task emerges from the robot’s actions, while other perceived parts of the environment, such as walls and floors, are ignored.
