Abstract

Achieving high repeatability and precision, especially in robot manipulators used in manufacturing automation, is a practical pose-recognition problem in robotics. Deviations from the nominal robot geometry can produce substantial errors at the end effector, exceeding 0.5 inches for a 6 ft robot arm. In this research, a pose-recognition system is developed that estimates the position of each robot joint and the end-effector pose using image processing. To estimate the joint angles, the system models the pose by combining a convolutional neural network (CNN) with a multilayer perceptron (MLP). The CNN classifies the input image captured by a remote monocular camera and produces a classification probability vector. The MLP then fits a multiple linear regression model to the probability vector produced by the CNN and outputs the value of each joint angle. The proposed model is compared with a Perspective-n-Point (PnP) method, which is based on tracking ArUco markers, and with the encoder values. The system was verified on a robot manipulator with four degrees of freedom. The proposed method exhibits superior performance in terms of joint-by-joint error, with an absolute error three units lower than that of the computer-vision method. Furthermore, when evaluating the end-effector pose, the proposed method showed a lower average standard deviation of 9 mm, compared with 13 mm for the computer-vision method.
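The pipeline described above (a CNN classifier whose probability vector feeds an MLP regressor that outputs joint angles) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the number of pose classes, the hidden-layer width, and the randomly initialised weights are all assumptions, and a random stub stands in for the trained CNN.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CLASSES = 16  # assumed number of pose classes output by the CNN (not from the paper)
N_JOINTS = 4    # the manipulator has four degrees of freedom

def softmax(z):
    """Convert logits to a classification probability vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def cnn_stub(image):
    """Stand-in for the trained CNN classifier: returns a probability
    vector over pose classes for the monocular camera image."""
    logits = rng.normal(size=N_CLASSES)  # placeholder for real CNN inference
    return softmax(logits)

def mlp_regress(p, W1, b1, W2, b2):
    """One-hidden-layer MLP mapping the probability vector to the
    four joint-angle values (the regression stage of the pipeline)."""
    h = np.tanh(p @ W1 + b1)
    return h @ W2 + b2

# Randomly initialised weights for illustration only; in practice these
# would be learned from labelled joint-angle data.
W1 = rng.normal(scale=0.1, size=(N_CLASSES, 32))
b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, N_JOINTS))
b2 = np.zeros(N_JOINTS)

p = cnn_stub(None)                       # probability vector from the CNN
angles = mlp_regress(p, W1, b1, W2, b2)  # one estimate per joint
print(angles.shape)
```

The two-stage split mirrors the abstract: classification first reduces the image to a compact probability vector, and the regression stage only has to map that low-dimensional vector to joint angles.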
