Abstract
Much research using deep reinforcement learning (DRL) has sought to make robot manipulators perform tasks with little prior knowledge of the work environment. However, because a robot manipulator operates in high-dimensional, continuous state and action spaces, it is difficult for DRL agents to find, or even learn, optimal policies. In this paper, we present a method for generating optimal paths for the end effector of a robot manipulator to follow. By working from images of the workspace, the method avoids the problems of high dimensionality while the system learns pick-and-place tasks through deep reinforcement learning. The robot simulator Webots was used to implement the robot model in a virtual work environment. Simulations confirmed that the manipulator successfully moves target objects from their starting points to their destinations while avoiding moving obstacles in real time, even as the size, position, and number of obstacles vary.
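The abstract describes learning end-effector paths from workspace images with reinforcement learning. As a minimal, hedged sketch of that idea, the snippet below uses tabular Q-learning (not the paper's deep network) on a small occupancy grid standing in for a downsampled workspace image, where the agent learns a collision-free path from a start cell to a goal cell. The 5x5 grid layout, reward values, and all function names are illustrative assumptions, not the authors' implementation.

```python
import random

# Illustrative assumption: a 5x5 occupancy grid as a stand-in for a
# downsampled workspace image. 0 = free cell, 1 = obstacle.
GRID = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
START, GOAL = (0, 0), (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Apply an action; moves into walls or obstacles leave the state unchanged."""
    r, c = state[0] + action[0], state[1] + action[1]
    if 0 <= r < 5 and 0 <= c < 5 and GRID[r][c] == 0:
        state = (r, c)
    # Step penalty encourages short paths; bonus on reaching the goal.
    reward = 10.0 if state == GOAL else -1.0
    return state, reward, state == GOAL

def train(episodes=2000, alpha=0.5, gamma=0.95, eps=0.2):
    """Tabular Q-learning with an epsilon-greedy behavior policy."""
    q = {}  # maps (state, action index) -> estimated return
    rng = random.Random(0)
    for _ in range(episodes):
        s, done = START, False
        for _ in range(100):
            if done:
                break
            if rng.random() < eps:
                a = rng.randrange(4)  # explore
            else:
                a = max(range(4), key=lambda i: q.get((s, i), 0.0))  # exploit
            s2, r, done = step(s, ACTIONS[a])
            best_next = max(q.get((s2, i), 0.0) for i in range(4))
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (
                r + gamma * best_next - q.get((s, a), 0.0))
            s = s2
    return q

def greedy_path(q, limit=50):
    """Roll out the greedy policy from START to extract the learned path."""
    s, path = START, [START]
    for _ in range(limit):
        if s == GOAL:
            break
        a = max(range(4), key=lambda i: q.get((s, i), 0.0))
        s, _, _ = step(s, ACTIONS[a])
        path.append(s)
    return path

q_table = train()
path = greedy_path(q_table)
```

The paper replaces the table with a deep network so the policy generalizes across image states, and operates on a continuously changing workspace rather than a fixed grid; this sketch only shows the value-learning core in its simplest form.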
Published in: IEIE Transactions on Smart Processing & Computing