Abstract

Since reinforcement learning algorithms suffer from the curse of dimensionality in continuous domains, generalization is the most challenging issue in this area. Both skill acquisition and transfer learning are successful techniques for overcoming this problem and can yield large improvements in an agent's learning performance. In this paper, we propose a novel graph-based skill acquisition method, named GSL, and a skill-based transfer learning framework, named STL. GSL discovers skills as high-level knowledge by applying community detection to a connectivity graph, a model that captures not only the agent's experience but also the environment's dynamics. STL incorporates skills previously learned on a source task to speed up learning on a new target task. The experimental results indicate the effectiveness of the proposed methods in dealing with continuous reinforcement learning problems.
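As a rough illustration of the idea behind GSL (not the paper's exact procedure), one could build a connectivity graph from sampled transitions and run an off-the-shelf community-detection algorithm; states on the borders between communities are then natural sub-goal candidates around which skills can be defined. The discretization of states, the helper names, and the border-state heuristic below are assumptions for illustration only.

```python
# Illustrative sketch only: build a connectivity graph from agent transitions
# and find communities whose border states can serve as skill sub-goals.
# This is an assumed simplification, not the paper's GSL implementation.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def build_connectivity_graph(transitions):
    """transitions: iterable of (state, next_state) pairs from agent experience.
    States are assumed to be already discretized (e.g., tile indices)."""
    G = nx.Graph()
    for s, s_next in transitions:
        if G.has_edge(s, s_next):
            G[s][s_next]["weight"] += 1   # reinforce frequently visited transitions
        else:
            G.add_edge(s, s_next, weight=1)
    return G

def find_skill_subgoals(G):
    """Partition the graph into communities and return border states:
    states with at least one neighbor in a different community."""
    communities = greedy_modularity_communities(G, weight="weight")
    membership = {s: i for i, c in enumerate(communities) for s in c}
    subgoals = {
        s for s in G.nodes
        if any(membership[n] != membership[s] for n in G.neighbors(s))
    }
    return communities, subgoals

if __name__ == "__main__":
    # Toy transitions forming two loosely connected clusters.
    demo = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]
    G = build_connectivity_graph(demo)
    communities, subgoals = find_skill_subgoals(G)
    print("communities:", [sorted(c) for c in communities])
    print("candidate sub-goal states:", sorted(subgoals))
```

In a skill-based transfer setting such as STL, options built around these sub-goals could then be reused on a related target task; how the skills are represented and transferred is specific to the paper and not shown here.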
