Abstract
Reinforcement learning (RL) is a double-edged sword: it frees the human trainer from having to provide voluminous supervised training data, or even from knowing a solution. On the other hand, a common complaint about RL is that learning is slow. Deep Q-learning (DQN), a relatively recent development, has allowed practitioners and scientists to solve tasks previously thought unsolvable by a reinforcement learning approach. However, DQN has brought an explosion in the number of model parameters, which has further exacerbated the computational demands of Q-learning during training. In this work, an ensemble approach is proposed that improves training time, measured as the number of interactions with the training environment. The presented experiments show that the proposed approach improves stability during training, yields higher average performance, makes training more reliable, and speeds up the learning of features in the convolutional layers.
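To make the ensemble idea concrete, below is a minimal sketch of an agent that combines Q-value estimates from several independently initialized Q-networks and acts greedily on their mean. This is an illustrative assumption about one possible combination rule, not the authors' method; the network architecture, ensemble size, and averaging rule are all hypothetical, and PyTorch is assumed.

```python
# Illustrative sketch only: the exact ensemble mechanism used in the paper
# is not described in the abstract. Assumes PyTorch and a discrete-action task.
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """A small fully connected Q-network mapping states to action values."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


class EnsembleQAgent:
    """Combines Q-value estimates from independently initialized networks."""

    def __init__(self, state_dim: int, n_actions: int, n_members: int = 5):
        self.members = [QNetwork(state_dim, n_actions) for _ in range(n_members)]

    @torch.no_grad()
    def act(self, state: torch.Tensor) -> int:
        # Average per-member Q-values and act greedily on the mean;
        # other combination rules (voting, optimism bonuses) are also possible.
        q_values = torch.stack([m(state) for m in self.members], dim=0)
        return int(q_values.mean(dim=0).argmax().item())


# Usage example on a dummy 4-dimensional state with 2 actions.
agent = EnsembleQAgent(state_dim=4, n_actions=2)
print(agent.act(torch.zeros(4)))
```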