We propose novel photogrammetry-based calibration methods for industrial robots that are guided by cameras or 3D sensors. In contrast to state-of-the-art methods, our methods calibrate the robot kinematics, the hand–eye transformations, and, for camera-guided robots, the interior orientation of the camera simultaneously. Our approach uses a minimal parameterization of the robot kinematics and hand–eye transformations. Furthermore, it uses a camera model that can handle the large range of complex lens distortions that occur in cameras typically used in machine vision applications. To determine the model parameters, geometrically meaningful photogrammetric error measures are used. They are independent of the parameterization of the model and typically result in higher accuracy. We apply a stochastic model for all parameters (observations and unknowns), which allows us to assess the precision and significance of the calibrated model parameters. To evaluate our methods, we further propose novel evaluation procedures that are relevant in real-world applications and do not require ground-truth values. Experiments on synthetic and real data show that our approach significantly improves the absolute positioning accuracy of industrial robots. By applying our approach to two different uncalibrated UR3e robots, one guided by a camera and one by a 3D sensor, we were able to reduce the RMS evaluation error by approximately 85% for each robot.