Abstract

This paper proposes a new approach for solving the problem of obstacle avoidance during manipulation tasks performed by redundant manipulators. The developed solution is based on a double neural network that uses the Q-learning reinforcement learning technique. Q-learning has been applied in robotics to achieve obstacle-free navigation and to solve path planning problems. Most studies address inverse kinematics and obstacle avoidance using variations of the classical Jacobian matrix approach, or by formulating redundancy resolution as a minimization problem for manipulators operating in known environments. Researchers who have used neural networks for solving inverse kinematics have often considered only a single obstacle in the workspace. This paper focuses on computing inverse kinematics and obstacle avoidance for complex, unknown environments with multiple obstacles in the workspace. Q-learning is used together with neural networks to plan and execute arm movements at each time instant. The algorithm, developed for general redundant kinematic link chains, has been tested on the particular case of the PowerCube manipulator. Before implementing the solution on the real robot, the simulation was integrated in an immersive virtual environment for better movement analysis and safer testing. The study results show that the proposed approach achieves a good average speed and a satisfactory target-reaching success rate.
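
The abstract names the core components (a two-network design trained with Q-learning to select arm movements) without detailing how the networks interact. As a purely illustrative sketch, the Python fragment below shows one common way two networks cooperate in Q-learning: an online network selects the greedy action and a separate target network evaluates it when forming the temporal-difference target. All dimensions, hyperparameters, and names are assumptions made for illustration and are not taken from the paper.

import numpy as np

# Illustrative sketch only: a generic two-network Q-learning target computation.
# The paper's actual architecture, state/action encoding, and hyperparameters
# are not given in the abstract; every value below is an assumption.

rng = np.random.default_rng(0)

STATE_DIM = 7    # assumed: one value per joint of a redundant arm
N_ACTIONS = 14   # assumed: a +/- increment per joint
GAMMA = 0.95     # assumed discount factor
HIDDEN = 32      # assumed hidden-layer width

def init_net():
    # One-hidden-layer Q-network mapping a state to a Q-value per discrete action.
    return {
        "W1": rng.normal(0.0, 0.1, (STATE_DIM, HIDDEN)),
        "b1": np.zeros(HIDDEN),
        "W2": rng.normal(0.0, 0.1, (HIDDEN, N_ACTIONS)),
        "b2": np.zeros(N_ACTIONS),
    }

def q_values(net, state):
    hidden = np.tanh(state @ net["W1"] + net["b1"])
    return hidden @ net["W2"] + net["b2"]

# Two networks: the online net picks the greedy action, the target net scores it.
online_net = init_net()
target_net = init_net()

def td_target(reward, next_state, done):
    # Q-learning target with decoupled action selection and evaluation.
    if done:
        return reward
    best_action = int(np.argmax(q_values(online_net, next_state)))
    return reward + GAMMA * q_values(target_net, next_state)[best_action]

# Example with a random placeholder transition (no real sensor data).
next_state = rng.normal(size=STATE_DIM)
print(td_target(reward=-0.1, next_state=next_state, done=False))

Separating action selection from evaluation is a standard way to stabilize the bootstrapped target; whether the paper's "double neural network" plays this particular role cannot be determined from the abstract alone.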
