Abstract
Guiding the action selection mechanism of an autonomous agent for learning control behaviors is a crucial issue in reinforcement learning. While classical approaches to reinforcement learning depend heavily on external feedback, intrinsically motivated approaches are more natural and follow the principles of infant sensorimotor development. In this work, we investigate the role of incremental learning of predictive models in generating curiosity, an intrinsic motivation, for directing the agent's choice of action, and propose a curiosity-driven reinforcement learning algorithm for continuous motor control. Our algorithm builds an internal representation of the state space that handles the computation of curiosity signals using the learned predictive models, and extends the Continuous Actor-Critic Learning Automaton (CACLA) to use extrinsic and intrinsic feedback. Evaluation of our algorithm on simple and complex robotic control tasks shows a significant performance gain for the intrinsically motivated goal-reaching agent compared to agents that are only motivated by extrinsic rewards.
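To make the described scheme concrete, here is a minimal sketch of the general idea: an intrinsic reward derived from the prediction error of an incrementally learned forward model, added to the extrinsic reward inside a CACLA-style actor-critic update. All names, dimensions, hyperparameters, and the use of linear function approximators are illustrative assumptions, not the paper's actual models or implementation.

```python
import numpy as np

# Hypothetical dimensions and hyperparameters (not from the paper).
STATE_DIM, ACTION_DIM = 4, 2
GAMMA, ALPHA_V, ALPHA_PI, ALPHA_F = 0.99, 1e-2, 1e-3, 1e-2
BETA = 0.5    # weight of the intrinsic (curiosity) reward
SIGMA = 0.1   # Gaussian exploration noise, as in CACLA

# Linear function approximators stand in for the paper's learned models.
V = np.zeros(STATE_DIM)                             # critic: V(s) = V . s
Pi = np.zeros((ACTION_DIM, STATE_DIM))              # actor:  pi(s) = Pi s
F = np.zeros((STATE_DIM, STATE_DIM + ACTION_DIM))   # forward model

def curiosity(s, a, s_next):
    """Intrinsic reward: squared prediction error of the forward model."""
    pred = F @ np.concatenate([s, a])
    err = s_next - pred
    return float(err @ err), pred

def act(s, rng=np.random.default_rng()):
    """CACLA-style exploration: Gaussian noise around the actor's output."""
    return Pi @ s + rng.normal(0.0, SIGMA, ACTION_DIM)

def update(s, a, r_ext, s_next):
    global V, Pi, F
    # 1. Curiosity signal from the incrementally learned predictive model.
    r_int, pred = curiosity(s, a, s_next)
    # Incremental forward-model update (gradient step on squared error).
    F += ALPHA_F * np.outer(s_next - pred, np.concatenate([s, a]))
    # 2. Combine extrinsic and intrinsic feedback (simple additive mix
    #    assumed here; the paper's combination may differ).
    r = r_ext + BETA * r_int
    # 3. CACLA updates: the TD error trains the critic; the actor moves
    #    toward the executed action only when the TD error is positive.
    delta = r + GAMMA * (V @ s_next) - (V @ s)
    V += ALPHA_V * delta * s
    if delta > 0:
        Pi += ALPHA_PI * np.outer(a - Pi @ s, s)
```

The positive-TD-error condition in step 3 is the defining feature of CACLA: the actor is reinforced toward an explored action only when that action turned out better than expected, which is what allows the curiosity bonus to steer exploration without destabilizing the policy.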