Abstract

Deep Reinforcement Learning (DRL) combines the benefits of Deep Learning and Reinforcement Learning. However, it still requires long training times and a large number of instances to reach acceptable performance. Transfer Learning (TL) offers an alternative that reduces the training time of DRL agents, using fewer instances and possibly improving performance. In this work, we propose a transfer learning formulation for DRL across tasks. Relevant source tasks are selected considering the action spaces and the Wasserstein distances between the outputs of a hidden layer of a convolutional neural network. Rather than transferring the whole source model, we propose a method for selecting only relevant kernels based on their entropy values, which results in smaller models that can produce better performance. In our experiments we use Deep Q-Networks (DQN) with Atari games. We evaluate the proposed method with different percentages of selected kernels and show that we can obtain performance similar to DQN with fewer interactions and with smaller models.
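
The abstract does not give implementation details, so the following is a minimal sketch of how the two selection steps might look: ranking candidate source tasks by the Wasserstein distance between hidden-layer activations, and keeping only the highest-entropy kernels of a convolutional layer. The function names, the histogram-based entropy estimate, and the use of flattened one-dimensional activation samples are assumptions made for illustration, not the authors' exact procedure.

    # Illustrative sketch only; names and details are assumptions, not the paper's exact method.
    import numpy as np
    from scipy.stats import wasserstein_distance

    def source_task_distance(target_acts, source_acts):
        """Wasserstein distance between hidden-layer activations of two tasks.

        target_acts, source_acts: 1-D arrays of activations collected from the
        same hidden layer of a convolutional network on each task's frames.
        """
        return wasserstein_distance(target_acts.ravel(), source_acts.ravel())

    def kernel_entropy(kernel, n_bins=32):
        """Shannon entropy of one convolutional kernel's weight distribution."""
        hist, _ = np.histogram(kernel.ravel(), bins=n_bins)
        p = hist / hist.sum()
        p = p[p > 0]                                  # drop empty bins to avoid log(0)
        return -np.sum(p * np.log2(p))

    def select_kernels(conv_weights, keep_fraction=0.5):
        """Indices of the highest-entropy kernels in a convolutional layer.

        conv_weights: array of shape (n_kernels, in_channels, k_h, k_w),
        e.g. the first convolutional layer of a source DQN.
        """
        entropies = np.array([kernel_entropy(k) for k in conv_weights])
        n_keep = max(1, int(keep_fraction * len(entropies)))
        return np.argsort(entropies)[::-1][:n_keep]   # top-n_keep kernels by entropy

    # Example: rank two synthetic source tasks, then keep 25% of 32 DQN-style 8x8 kernels.
    rng = np.random.default_rng(0)
    target = rng.normal(size=10_000)
    print(source_task_distance(target, rng.normal(0.1, 1.0, 10_000)))   # closer task
    print(source_task_distance(target, rng.normal(2.0, 1.5, 10_000)))   # farther task
    print(select_kernels(rng.normal(size=(32, 4, 8, 8)), keep_fraction=0.25))

Under these assumptions, the source task with the smallest activation distance (and a compatible action space) would be chosen, and only the selected kernels would be copied into the target model before fine-tuning.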
