Abstract

Robotic control via reinforcement learning (RL) has made significant advances. However, a serious weakness of this approach is that RL policies are prone to overfitting and transfer poorly to new settings. Transfer in reinforcement learning means that only a few samples are needed to train policy networks for new tasks. In this paper we investigate the problem of learning transferable policies for robots with serial structures, such as robotic arms, with the help of graph neural networks (GNNs). GNNs have previously been employed to incorporate the robot structure explicitly into the policy network, making the policy easier to generalize or transfer. Based on a kinematic analysis of the serial robotic structure, we further improve the policy network by proposing a weighted information aggregation strategy. Experiments are conducted in a few-shot policy learning setting on a robotic arm. The results show that the new aggregation strategy significantly improves both the learning speed and the accuracy of the resulting policy.
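The abstract does not include implementation details, so the following is only a minimal, hypothetical sketch of what a weighted neighbor-aggregation step for a serial (chain-structured) robot might look like, assuming a PyTorch-style GNN policy in which each joint node aggregates messages from its chain neighbors with learned, normalized edge weights instead of a uniform mean. All names (`WeightedChainAggregation`, `node_feats`, layer sizes) are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class WeightedChainAggregation(nn.Module):
    """Sketch of one message-passing step for a serial robot (joints ordered
    base -> end-effector). Each joint receives messages from its parent and
    child joints, weighted by learned per-edge scores. Illustrative only;
    not the authors' implementation."""

    def __init__(self, feat_dim: int, hidden_dim: int):
        super().__init__()
        self.msg = nn.Linear(feat_dim, hidden_dim)      # per-neighbor message function
        self.score = nn.Linear(2 * feat_dim, 1)         # scores a (node, neighbor) pair
        self.update = nn.GRUCell(hidden_dim, feat_dim)  # updates each joint embedding

    def forward(self, node_feats: torch.Tensor) -> torch.Tensor:
        # node_feats: (num_joints, feat_dim), one row per joint along the chain.
        n = node_feats.size(0)
        aggregated = []
        for i in range(n):
            # In a serial chain, joint i's neighbors are i-1 (parent) and i+1 (child).
            nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
            pairs = torch.stack(
                [torch.cat([node_feats[i], node_feats[j]]) for j in nbrs]
            )
            # Normalized edge weights replace uniform mean aggregation.
            w = torch.softmax(self.score(pairs).squeeze(-1), dim=0)
            msgs = self.msg(node_feats[nbrs])
            aggregated.append((w.unsqueeze(-1) * msgs).sum(dim=0))
        # GRU-style update of every joint embedding from its weighted aggregate.
        return self.update(torch.stack(aggregated), node_feats)
```

Under these assumptions, the module would be applied repeatedly to per-joint features (e.g., joint angles, velocities, and link parameters) before a small head maps each joint embedding to its action, so the same weights can be reused across arms with different numbers of joints.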
