Abstract

The presence of robots in our daily lives is becoming more common, and robots are starting to carry out more complex tasks. This increase in task complexity makes conventional control systems insufficient, so a plausible approach is needed for robots to learn how to perform these tasks. Reinforcement learning enables robots to perform complex tasks without highly engineered control systems. However, applying reinforcement learning to robotic applications is challenged by several problems, such as high dimensionality. In this paper, we therefore study the performance of the Hindsight Experience Replay (HER) algorithm, which addresses the high dimensionality problem. We analyze the algorithm's performance using a simulated robotic arm that picks and places different objects. We then propose the use of vision feedback to control the gripper of the robotic arm. The results and analysis highlight some of HER's limitations when dealing with objects that have limited grasping points. Our proposed method allows the robotic arm to pick objects using the same trained policy, without the need to retrain the agent for new objects. Finally, we show that with our method the robotic arm picks objects with a higher success rate than without vision feedback.
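
HER works by replaying transitions with substituted goals: after an episode, stored transitions are relabeled with states the arm actually reached, so that sparse rewards remain informative even when the original goal was missed. The following is a minimal sketch of this relabeling idea; the function names, the final-state relabeling strategy, and the distance-threshold reward are illustrative assumptions rather than the implementation evaluated in the paper.

    # Minimal sketch of HER-style goal relabeling (illustrative only; the
    # function names, the "final" relabeling strategy, and the sparse reward
    # shape are assumptions, not the authors' implementation).
    import numpy as np

    def sparse_reward(achieved, desired, threshold=0.05):
        # 0 when the achieved goal lies within `threshold` of the desired goal,
        # -1 otherwise (the sparse formulation HER is designed to handle).
        dist = np.linalg.norm(np.asarray(achieved) - np.asarray(desired))
        return 0.0 if dist < threshold else -1.0

    def relabel_episode(episode):
        # Store each transition twice: once with the original desired goal and
        # once with the goal replaced by the state actually achieved at the end
        # of the episode, so failed attempts still yield useful reward signal.
        relabeled = []
        final_achieved = episode[-1]["achieved_goal"]
        for t in episode:
            relabeled.append({**t, "goal": t["desired_goal"],
                              "reward": sparse_reward(t["achieved_goal"], t["desired_goal"])})
            relabeled.append({**t, "goal": final_achieved,
                              "reward": sparse_reward(t["achieved_goal"], final_achieved)})
        return relabeled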
