Abstract

Task-oriented robot learning has shown significant potential with the development of Reinforcement Learning (RL) algorithms. However, learning long-horizon tasks remains a formidable challenge for robots because such tasks are inherently complex, typically comprising multiple diverse stages. General-purpose RL algorithms commonly suffer from slow convergence, or fail to converge at all, when applied to such tasks. These difficulties stem from local-optima traps and redundant exploration at the start of a new stage or at the junction between two consecutive stages. To address these challenges, we propose a novel state-dependent maximum entropy (SDME) reinforcement learning algorithm, which balances the trade-off between exploration and exploitation around three kinds of critical states arising from the unique structure of long-horizon tasks. We conducted experiments in an open-source simulation environment on two representative long-horizon tasks. The proposed SDME algorithm learns faster and more stably, requiring only one-third of the learning samples needed by baseline approaches. Furthermore, we assess the generalization ability of our method under randomly initialized conditions; the success rate of the SDME algorithm is nearly twice that of the baselines. Our code will be available at https://github.com/Peter-zds/SDME.
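
The abstract does not detail how the state-dependent entropy weight is computed. As a minimal, purely illustrative sketch of the general idea (not the paper's actual formulation), the Python snippet below raises a SAC-style entropy coefficient when the agent is near a critical state, such as a stage junction; the names state_dependent_alpha, critical_states, alpha_base, alpha_boost, and radius are assumptions introduced here for illustration.

    # Illustrative sketch only: a state-dependent entropy weight for a
    # SAC-style soft value estimate. Names and thresholds are hypothetical,
    # not taken from the SDME paper.
    import numpy as np

    def state_dependent_alpha(state, critical_states, alpha_base=0.05,
                              alpha_boost=0.2, radius=0.5):
        """Return a larger entropy coefficient when `state` is close to a
        critical state (e.g. the start of a new stage or a stage junction),
        so the policy explores more there and exploits elsewhere."""
        if len(critical_states) == 0:
            return alpha_base
        dists = np.linalg.norm(critical_states - state, axis=1)
        return alpha_boost if dists.min() < radius else alpha_base

    def soft_value_estimate(q_values, log_probs, alpha):
        """Maximum-entropy value estimate: V(s) = E_{a~pi}[Q(s,a) - alpha * log pi(a|s)]."""
        probs = np.exp(log_probs)
        return np.sum(probs * (q_values - alpha * log_probs))

    # Toy usage: one state near a critical region, one far from it.
    critical = np.array([[1.0, 0.0], [3.0, 2.0]])
    q = np.array([1.0, 0.8, 0.3])
    log_pi = np.log(np.array([0.5, 0.3, 0.2]))

    for s in (np.array([1.1, 0.1]), np.array([10.0, 10.0])):
        a = state_dependent_alpha(s, critical)
        print(s, "alpha =", a, "V(s) =", round(soft_value_estimate(q, log_pi, a), 3))

The point of the sketch is only that the entropy bonus is a function of the state rather than a global constant, which is what lets exploration concentrate around critical states.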
