Abstract

A Curious Robot Vision System

Pramod Chandrashekhariah1 and Jochen Triesch1*

1 Frankfurt Institute for Advanced Studies (FIAS), Germany

The ability to learn autonomously, as biological organisms do, is a challenge for future robots assisting humans in their homes, workplaces, or other changing environments [1-3]. Autonomous learning implies that the system can choose which activities to engage in and hence what it can learn about. Here we introduce an autonomously learning robot vision system, implemented on the iCub robot, that explores a scene and learns visual object representations without any human assistance. The system integrates a number of visual competences, including attention, stereoscopic vision, segmentation, tracking, model learning, and recognition. It is driven by an attention mechanism that endows it with a simple form of curiosity: the system preferentially attends to objects for which it estimates that it can still learn something new.

The basic mode of operation is as follows. An attention mechanism combines bottom-up (saliency-based) attention with top-down information about object familiarity and previously visited locations, driving the system to look at interesting points in the scene. At each location, an object candidate is segmented from the background using stereo information and actively tracked if it is moving. If the object is novel, a new object model is created, containing information about the spatial arrangement of local image features; otherwise, the existing model for the object is updated. The system continues to look at and learn about an object as long as its model can still be improved by acquiring additional features. When the learning progress for the current object drops below a threshold, attention is diverted to a new location, where the process repeats.
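The attend-until-progress-stalls loop described above can be sketched in a few lines. This is a minimal toy illustration under assumed names, not the authors' implementation: ObjectModel, PROGRESS_THRESHOLD, and attend_and_learn are hypothetical, and "learning progress" is approximated here as the fraction of newly observed features per fixation.

```python
# Minimal sketch of the curiosity-driven attention loop (illustrative only).
# All names and the feature representation are assumptions; the real system
# uses saliency maps, stereo segmentation, and spatial feature arrangements.

PROGRESS_THRESHOLD = 0.05  # assumed: divert attention when progress drops below this


class ObjectModel:
    """Toy object model: a growing set of local image features."""

    def __init__(self):
        self.features = set()

    def update(self, observed_features):
        """Incorporate one fixation's features; return the learning progress,
        measured here as the fraction of observed features that were new."""
        new = observed_features - self.features
        self.features |= observed_features
        return len(new) / max(len(observed_features), 1)


def attend_and_learn(scene):
    """Visit each scene location, learning about the object there until the
    learning progress stalls, then move on.

    `scene` maps a location name to a list of feature sets (one per fixation).
    Returns the learned object models keyed by location.
    """
    models = {}
    for location, fixations in scene.items():  # attention selects a location
        model = models.setdefault(location, ObjectModel())
        for observed in fixations:
            progress = model.update(observed)
            if progress < PROGRESS_THRESHOLD:
                break  # nothing new to learn here: divert attention elsewhere
    return models
```

For example, with fixations `[{1, 2, 3}, {3, 4}, {4}, {5, 6}]` at one location, the third fixation contributes nothing new, so attention moves on before the fourth fixation is ever made, and the learned model holds only the features {1, 2, 3, 4}.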
In experiments comparing our "curious" attention system to several alternatives, the system achieved greater learning speed and higher recognition accuracy when learning was focused on objects and locations where the learning progress is expected to be high.

Acknowledgements

Supported by the BMBF through the Bernstein Focus: Neurotechnology Frankfurt.
