Abstract

A neural-network-based self-organizing control system for a robotic manipulator is presented. The end-effector position and orientation control loop is closed with visual data: two neural networks generate the necessary control inputs for the manipulator joints from image measurements. The task considered is to move the manipulator end-effector to a position from which an object can easily be gripped. The relations between the image data of the object and the joint angles corresponding to the desired end-effector position and orientation are clearly nonlinear. The system organizes itself for any manipulator configuration by learning this nonlinear mapping, regardless of joint type and geometric dimensions, so that no inverse kinematic solution needs to be calculated. A global network learns the control signals for larger object distances, and a local network for smaller ones. The generalization ability of the neural networks ensures robust and adaptive control when the object position changes slightly.

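To make the coarse/fine architecture described above concrete, the following is a minimal sketch (not the paper's actual networks or training scheme): two small multilayer perceptrons map an assumed image-derived feature vector (object centroid, apparent size, orientation) to joint-angle corrections, with the "global" network selected for larger object distances and the "local" network for the final approach. All names, dimensions, and the distance threshold are illustrative assumptions.

```python
import numpy as np

class MLP:
    """Single-hidden-layer network trained with plain gradient descent (illustrative)."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        # tanh hidden layer, linear output layer
        self.h = np.tanh(x @ self.W1 + self.b1)
        return self.h @ self.W2 + self.b2

    def train_step(self, x, target):
        # one gradient-descent step on the squared output error
        y = self.forward(x)
        err = y - target
        dW2 = np.outer(self.h, err)
        dh = (err @ self.W2.T) * (1.0 - self.h ** 2)
        dW1 = np.outer(x, dh)
        self.W2 -= self.lr * dW2
        self.b2 -= self.lr * err
        self.W1 -= self.lr * dW1
        self.b1 -= self.lr * dh
        return float(np.mean(err ** 2))


# Hypothetical feature vector: object centroid (u, v), apparent size, orientation.
N_FEATURES, N_JOINTS = 4, 6
global_net = MLP(N_FEATURES, 20, N_JOINTS)   # coarse moves, larger object distances
local_net = MLP(N_FEATURES, 20, N_JOINTS)    # fine moves, near the grasp position

def joint_correction(features, distance, threshold=0.1):
    """Select global or local network by object distance, as the abstract describes."""
    net = global_net if distance > threshold else local_net
    return net.forward(features)
```

In this sketch the networks are trained on observed (image feature, joint correction) pairs, so the mapping is learned rather than derived from the manipulator's kinematic model, which mirrors the abstract's point that no inverse kinematic solution needs to be computed.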