Abstract

A reinforcement learning approach based on modular function approximation is presented. Cerebellar Model Articulation Controller (CMAC) networks are incorporated into the Hierarchical Mixtures of Experts (HME) architecture; the resulting architecture is referred to as HME-CMAC. A computationally efficient on-line learning algorithm based on the Expectation-Maximization (EM) algorithm is proposed to achieve fast function approximation with the HME-CMAC architecture. The Compositional Q-Learning (CQ-L) framework establishes the relationship between the Q-values of a composite task and those of the elemental tasks in its decomposition. This framework is extended here to allow rewards in non-terminal states. An implementation of the extended CQ-L framework using the HME-CMAC architecture is used to perform task decomposition in a realistic simulation of a two-link manipulator with non-linear dynamics. The context-dependent reinforcement learning achieved by this approach has advantages over monolithic approaches in terms of learning speed, storage requirements and the ability to cope with changing goals.
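To make the architecture concrete, the sketch below shows a minimal single-level version of the HME-CMAC idea under stated assumptions: CMAC (tile-coding) experts combined by a softmax gating network, trained with an on-line EM-style step (E-step responsibilities, M-step responsibility-weighted updates). The class names, the 1-D input, the linear gate and the Gaussian noise variance are all illustrative choices, not the paper's exact hierarchy or update equations.

```python
import numpy as np

class CMAC:
    """Albus CMAC / tile-coding approximator over a 1-D input (for brevity)."""

    def __init__(self, n_tilings=8, n_tiles=16, lo=0.0, hi=1.0, lr=0.1):
        self.n_tilings = n_tilings
        self.n_tiles = n_tiles
        self.lo = lo
        self.width = (hi - lo) / n_tiles
        # One weight table per tiling; +1 tile absorbs the offset overhang.
        self.w = np.zeros((n_tilings, n_tiles + 1))
        self.lr = lr / n_tilings

    def _active(self, x):
        # Each tiling is shifted by a different fraction of one tile width.
        offsets = (np.arange(self.n_tilings) / self.n_tilings) * self.width
        idx = np.floor((x - self.lo + offsets) / self.width).astype(int)
        return np.clip(idx, 0, self.n_tiles)

    def predict(self, x):
        return self.w[np.arange(self.n_tilings), self._active(x)].sum()

    def update(self, x, target, weight=1.0):
        # Weighted LMS step on the active tiles only.
        err = target - self.predict(x)
        self.w[np.arange(self.n_tilings), self._active(x)] += self.lr * weight * err

class MixtureOfCMACs:
    """Single-level mixture of CMAC experts with a softmax gating network."""

    def __init__(self, n_experts=2, sigma2=0.05, gate_lr=0.05, **cmac_kwargs):
        self.experts = [CMAC(**cmac_kwargs) for _ in range(n_experts)]
        self.v = np.zeros((n_experts, 2))  # gating weights on the basis [x, 1]
        self.sigma2 = sigma2               # assumed Gaussian expert noise variance
        self.gate_lr = gate_lr

    def _gate(self, x):
        z = self.v @ np.array([x, 1.0])
        z -= z.max()                       # numerically stable softmax
        e = np.exp(z)
        return e / e.sum()

    def predict(self, x):
        g = self._gate(x)
        return sum(gi * ex.predict(x) for gi, ex in zip(g, self.experts))

    def em_step(self, x, y):
        # E-step: posterior responsibility h_i of expert i for the pair (x, y).
        g = self._gate(x)
        lik = np.array([np.exp(-(y - ex.predict(x)) ** 2 / (2.0 * self.sigma2))
                        for ex in self.experts])
        h = g * lik
        h = h / (h.sum() + 1e-12)
        # M-step (on-line): experts move toward y in proportion to h_i,
        # and the gate moves its priors g toward the posteriors h.
        for hi, ex in zip(h, self.experts):
            ex.update(x, y, weight=hi)
        self.v += self.gate_lr * np.outer(h - g, np.array([x, 1.0]))

# Usage: a piecewise target lets the two experts specialise by region.
rng = np.random.default_rng(0)
model = MixtureOfCMACs(n_experts=2)
for _ in range(5000):
    x = rng.uniform(0.0, 1.0)
    y = np.sin(6.0 * x) if x < 0.5 else 1.0 - x
    model.em_step(x, y)
print(model.predict(0.25), model.predict(0.75))
```

The gating update is a gradient step on the gate's log-likelihood (the posterior minus the prior times the gate input), which is one common way to realise the M-step on-line; the paper's own on-line EM algorithm may differ in its exact form.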
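The CQ-L relationship referred to above can be stated, following Singh's original formulation (the notation here is illustrative):

    Q_C(x, T_j, a) = Q_{T_j}(x, a) + K(C, j)

where Q_{T_j} is the Q-function of the elemental task T_j, Q_C(x, T_j, a) is the Q-value of composite task C while its j-th subtask is active, and K(C, j) depends only on the composite task and the subtask position, not on the state x or the action a. The original formulation assumes rewards only in terminal states; the extension reported here relaxes that assumption.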
