Abstract

Interactive Reinforcement Learning (IntRL) allows human teachers to accelerate the learning process of Reinforcement Learning (RL) robots. However, IntRL has largely been limited to tasks with discrete action spaces in which actions are relatively slow. This limits IntRL's application to more complicated and challenging robotic tasks, the very tasks that modern RL is particularly well-suited for. We seek to bridge this gap by presenting Continuous Action-space Interactive Reinforcement learning (CAIR): the first continuous action-space IntRL algorithm capable of using teacher feedback to outperform state-of-the-art RL algorithms in those tasks. CAIR combines policies learned from the environment and the teacher into a single policy that proportionally weights the two policies based on their agreement. This allows a CAIR agent to learn a relatively stable policy despite potentially noisy or coarse teacher feedback. We validate our approach in two simulated robotics tasks with easy-to-design and easy-to-understand heuristic oracle teachers. Furthermore, we validate our approach in a human-subjects study conducted through Amazon Mechanical Turk and show that CAIR outperforms the prior state of the art in Interactive RL.
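As a rough illustration of the agreement-based weighting described in the abstract, the sketch below blends an environment-learned action with a teacher-shaped action in a continuous action space. The function name, the exponential agreement measure, and the `beta` parameter are illustrative assumptions, not CAIR's published formulation.

```python
import numpy as np

def combine_actions(env_action: np.ndarray,
                    teacher_action: np.ndarray,
                    beta: float = 1.0) -> np.ndarray:
    """Blend two continuous actions in proportion to their agreement.

    Illustrative sketch only: the agreement measure (an exponential of the
    action-space distance) and the blending rule are assumptions, not the
    paper's exact method.
    """
    # Agreement decays as the two policies' chosen actions move apart.
    agreement = np.exp(-beta * np.linalg.norm(env_action - teacher_action))
    # When the policies agree, the teacher-shaped action dominates;
    # when they disagree, fall back toward the environment-learned action.
    return agreement * teacher_action + (1.0 - agreement) * env_action

# Example usage with 2-D actions.
blended = combine_actions(np.array([0.2, -0.5]), np.array([0.25, -0.4]))
```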
