Abstract

In this paper, we propose a novel visual learning framework for developmental robotics agents that mimics the developmental learning of human infants. The framework enables an agent to autonomously perceive depth by simultaneously developing its visual sensory representation, eye movement control, and depth representation knowledge through the integration of multiple visual depth cues during self-induced lateral body movement. Based on active efficient coding (AEC) theory, sparse coding and reinforcement learning are tightly coupled by sharing a unified cost function that jointly improves the sensory coding model and the eye motor control. The eye motor control signals generated for the different visual depth cues are then used together as inputs to a multi-layer neural network that represents the depth conveyed through simple human-robot interaction. We show that the proposed learning framework, implemented on the HOAP-3 humanoid robot simulator, can effectively learn to develop the visual sensory representation, eye motor control, and depth perception autonomously, with self-calibrating ability, at the same time.
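The core AEC idea described above, a sensory coding model and a motor controller coupled through one cost function, can be illustrated with a deliberately minimal sketch. This is not the paper's implementation: the orthonormal dictionary, the `observe` function (which makes a binocular patch easy to sparse-code only when the vergence action matches the true depth), the bandit-style action values, and all constants are illustrative assumptions. The only point it demonstrates is that the negative sparse-coding residual can serve directly as the reinforcement-learning reward, so that improving coding efficiency and improving eye motor control are the same objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthonormal "dictionary" of visual features (stand-in for a learned sparse-coding basis).
DIM, N_ATOMS = 16, 6
basis, _ = np.linalg.qr(rng.standard_normal((DIM, DIM)))
dictionary = basis.T[:N_ATOMS]            # rows are unit-norm atoms

def sparse_code(patch, k=3):
    """Greedy matching pursuit: reconstruct patch with k atoms, return the residual."""
    residual = patch.copy()
    for _ in range(k):
        scores = dictionary @ residual
        j = int(np.argmax(np.abs(scores)))
        residual = residual - scores[j] * dictionary[j]
    return residual

def observe(true_depth, vergence_action):
    """Hypothetical binocular patch: lies in the dictionary's span when the vergence
    action matches the true depth, and is corrupted (hard to code) otherwise."""
    clean = rng.standard_normal(3) @ dictionary[:3]
    mismatch = abs(true_depth - vergence_action)
    return clean + 0.8 * mismatch * rng.standard_normal(DIM)

# Bandit-style RL over discrete vergence actions; reward = -(sparse-coding error),
# i.e. the coder and the controller share one cost function, as in AEC.
ACTIONS = range(5)
true_depth = 2
q = np.zeros(len(ACTIONS))
n = np.zeros(len(ACTIONS))
for _ in range(400):
    a = int(rng.integers(len(ACTIONS))) if rng.random() < 0.2 else int(np.argmax(q))
    reward = -np.sum(sparse_code(observe(true_depth, a)) ** 2)
    n[a] += 1
    q[a] += (reward - q[a]) / n[a]      # incremental mean of observed rewards

best = int(np.argmax(q))
print(best)  # the learned vergence action converges to the true depth
```

In the paper's full framework the dictionary itself is also adapted, and several such controllers (one per depth cue) feed a multi-layer network; here a fixed basis and a single cue keep the coupling visible in a few lines.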

