Abstract

Deep reinforcement learning (DRL) has proven effective at learning policies over high-dimensional state and action spaces, and a variety of robot manipulation tasks have recently been accomplished with end-to-end DRL strategies. An end-to-end DRL strategy treats a robot manipulation task as a black box. Alternatively, a robot manipulation task can be divided into multiple subtasks and accomplished by non-learning-based approaches. A hybrid DRL strategy integrates the two: it accomplishes some subtasks of a robot manipulation task by DRL and the remaining subtasks by non-learning-based approaches. However, the effects of integrating DRL with non-learning-based approaches on learning speed and on the robustness of DRL to model uncertainties have not been examined. In this study, an end-to-end DRL strategy and a hybrid DRL strategy are developed and compared in controlling a cable-driven parallel robot. The results show that, by integrating DRL with non-learning-based approaches, the hybrid DRL strategy learns faster and is more robust to model uncertainties than the end-to-end DRL strategy. By taking advantage of both learning and non-learning-based approaches, the hybrid DRL strategy thus provides an alternative way to accomplish a robot manipulation task.
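The subtask decomposition described above can be sketched as a dispatcher that routes each subtask either to a learned policy or to a classical controller. This is a minimal illustrative sketch, not the paper's implementation: the subtask names, the linear stand-in for a trained DRL policy, and the PD controller used as the non-learning-based approach are all assumptions made for the example.

```python
import numpy as np


class LearnedPolicy:
    """Stand-in for a trained DRL policy (assumed here to be a
    simple linear map from state to action for illustration)."""

    def __init__(self, weights):
        self.weights = np.asarray(weights, dtype=float)

    def act(self, state):
        return float(self.weights @ state)


class PDController:
    """Non-learning-based approach: a basic PD controller acting on
    a state of (position error, velocity)."""

    def __init__(self, kp, kd):
        self.kp, self.kd = kp, kd

    def act(self, state):
        position_error, velocity = state
        return self.kp * position_error - self.kd * velocity


class HybridController:
    """Hybrid strategy: each subtask is handled either by the DRL
    policy or by a non-learning-based controller."""

    def __init__(self):
        self.handlers = {}

    def register(self, subtask, controller):
        self.handlers[subtask] = controller

    def act(self, subtask, state):
        return self.handlers[subtask].act(np.asarray(state, dtype=float))


# Hypothetical subtask split for illustration only:
hybrid = HybridController()
hybrid.register("reach", LearnedPolicy([0.5, -0.1]))   # learned subtask
hybrid.register("hold", PDController(kp=2.0, kd=0.3))  # analytic subtask

u_reach = hybrid.act("reach", [1.0, 0.0])  # 0.5*1.0 + (-0.1)*0.0 = 0.5
u_hold = hybrid.act("hold", [0.2, 0.1])    # 2.0*0.2 - 0.3*0.1 = 0.37
```

An end-to-end strategy would instead route the entire task through a single learned policy; the point of the hybrid design is that subtasks with well-understood dynamics can keep their analytic controllers, leaving less for the DRL agent to learn.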
