Abstract

It is well established that human decision making and instrumental control rely on multiple systems, some using habitual action selection and some requiring deliberate planning. Deliberate planning systems predict action outcomes using an internal model of the agent's environment, while habitual action selection systems learn to automate behavior by repeating previously rewarded actions. Habitual control is computationally efficient but inflexible in changing environments; conversely, deliberate planning may be computationally expensive but remains flexible in dynamic environments. This paper proposes a general architecture comprising both control paradigms by introducing an arbitrator that decides which subsystem is used at any time. The system is implemented for a target-reaching task with a simulated two-joint robotic arm, combining a supervised internal model with deep reinforcement learning. Across permutations of target-reaching conditions, we demonstrate that the proposed system rapidly learns the kinematics of the arm without a priori knowledge and is robust to (A) changing environmental reward and kinematics and (B) occluded vision. The arbitrated model is compared with instances of the model that use exclusive deliberate planning with the internal model and exclusive habitual control. The results show how such a model can harness the benefits of both systems, making fast decisions in reliable circumstances while optimizing performance in changing environments. In addition, the proposed model learns very quickly. Finally, the system that includes internal models is able to reach the target under visual occlusion, whereas the purely habitual system cannot operate adequately under such conditions.
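As a concrete illustration of the plant in this task, the forward kinematics of a two-joint planar arm can be sketched as below. The link lengths and function name are illustrative assumptions, not values taken from the paper; the paper's point is that the agent learns this mapping without a priori knowledge.

```python
import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    """End-effector position of a planar two-joint arm.

    theta1, theta2: joint angles in radians (theta2 relative to link 1).
    l1, l2: link lengths (assumed values for illustration).
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

An internal (forward) model in the paper's sense would approximate this mapping from experience; the inverse model would recover joint angles that reach a given target position.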

Highlights

  • Much of the current reinforcement learning (RL) literature implements model-free control

  • We considered three versions of Arbitrated Predictive Actor-Critic (APAC) that represent (a) exclusive habits, (b) exclusive deliberate planning, and (c) arbitration between habit and planning

  • If the arbitrator always selects the action from the inverse model at each step, APAC becomes an exclusive deliberate planning controller, which we call the supervised predictive actor-critic (SPAC) (Fard et al., 2017)
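The arbitration idea in the highlights above can be sketched as follows. The error-threshold switching rule, window size, and all names here are illustrative assumptions, not the paper's actual arbitration criterion: the sketch hands control to the cheap habitual policy only once its recent prediction errors are small, and otherwise defers to the planner's (inverse-model) action.

```python
import numpy as np

class Arbitrator:
    """Toy arbitrator between a habitual policy and a planning controller."""

    def __init__(self, error_threshold=0.05, window=20):
        self.error_threshold = error_threshold  # assumed switching criterion
        self.window = window                    # sliding window of errors
        self.recent_errors = []

    def record_error(self, predicted_outcome, observed_outcome):
        # Track how well predicted outcomes match what actually happened.
        err = float(np.linalg.norm(np.asarray(predicted_outcome)
                                   - np.asarray(observed_outcome)))
        self.recent_errors.append(err)
        if len(self.recent_errors) > self.window:
            self.recent_errors.pop(0)

    def select(self, habitual_action, planned_action):
        # Trust habits only after predictions have been reliably accurate;
        # fall back on deliberate planning otherwise (e.g. after a change
        # in reward or kinematics drives the error back up).
        if self.recent_errors and np.mean(self.recent_errors) < self.error_threshold:
            return habitual_action
        return planned_action
```

Under this sketch, a change in the environment inflates the recorded errors, so control automatically reverts to the planner, mirroring the flexibility/efficiency trade-off the paper describes.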


Summary

A Novel Model for Arbitration Between Planning and Habitual Control Systems

Reviewed by: Eiji Uchibe, Advanced Telecommunications Research Institute International (ATR), Japan; Yangwei You, Institute for Infocomm Research (A*STAR), Singapore.

It is well established that human decision making and instrumental control rely on multiple systems, some using habitual action selection and some requiring deliberate planning. This paper proposes a general architecture comprising both control paradigms by introducing an arbitrator that decides which subsystem is used at any time. The system is implemented for a target-reaching task with a simulated two-joint robotic arm, combining a supervised internal model with deep reinforcement learning.

INTRODUCTION
THEORETICAL PREMISES
APAC FOR TARGET REACHING
Habit Learning Control System
Internal Models for Planning
Arbitration Between Habitual and Planning Controllers
Experimental Conditions and Environment
RESULTS
CONCLUSION
