Abstract

Although reinforcement learning (RL) can solve complex tasks after training, trained agents are difficult to extend to target environments with perturbations. This lack of generalization ability has hindered the large-scale application of RL. Rich neural network architectures have brought huge improvements to deep learning, but they are poorly adapted to RL and can even have negative effects. As a result, RL algorithms have few choices of neural network, which greatly limits the representation and generalization ability of RL. To overcome these limitations, we propose a deep semi-dense compression network (DSCN) to improve the generalization ability of RL. First, we perform a structural extension on a general network model for RL. Then, based on information theory, we propose a semi-dense connection to enhance the neural information flow (NIF) of initial features. Finally, drawing on the ideas of curriculum learning, we propose a channel-compression approach to filter out the redundant information of the deep network. In addition, we extend the experimental environments and evaluation metrics of an existing platform, which allows the performance of DSCN to be fully evaluated. The experimental results show that our model achieves stable and significant improvements in generalization performance.
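To make the semi-dense connection idea concrete, here is a minimal sketch of one plausible reading: unlike a fully dense (DenseNet-style) block, only the initial features are re-injected into each later layer's input, preserving the information flow of early features without full pairwise links. The layer sizes, the exact wiring, and the function names are illustrative assumptions, not the paper's DSCN specification.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def semi_dense_forward(x0, weights):
    """Toy forward pass of a semi-dense block (illustrative assumption,
    not the paper's exact architecture): the initial feature vector x0
    is concatenated into every later layer's input, so early information
    keeps flowing through the depth of the network."""
    h = x0
    for i, W in enumerate(weights):
        inp = h if i == 0 else np.concatenate([h, x0])  # re-inject x0
        h = relu(W @ inp)
    return h

# Shapes: x0 has 4 features; each hidden layer has 8 units.
rng = np.random.default_rng(0)
x0 = rng.standard_normal(4)
weights = [
    rng.standard_normal((8, 4)),      # layer 1: x0 -> 8
    rng.standard_normal((8, 8 + 4)),  # layer 2: [h, x0] -> 8
    rng.standard_normal((8, 8 + 4)),  # layer 3: [h, x0] -> 8
]
out = semi_dense_forward(x0, weights)
print(out.shape)  # (8,)
```

The channel-compression step described in the abstract would then act on such a block's channels to prune redundant information; its details are not recoverable from the abstract alone.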
