The low positioning accuracy of industrial robots limits their application in industry. Vision-based kinematic calibration, known for its rapid processing and economic efficiency, is an effective way to enhance this accuracy. However, most such methods are constrained by the camera's field of view, limiting their effectiveness in large workspaces. This paper proposes a novel calibration framework that combines monocular vision with ArUco fiducial markers. Firstly, a robot positioning error model was established by considering the kinematic error based on the Modified Denavit-Hartenberg (MDH) model. Subsequently, a calibrated camera was used to create an ArUco map as an alternative to a traditional single calibration target. The map was constructed by stitching images of ArUco markers with unique identifiers, and its accuracy was enhanced through closed-loop detection and a global optimization that minimizes reprojection error. Then, initial hand-eye parameters were determined, and the robot's end-effector pose was acquired through the ArUco map. The Levenberg-Marquardt algorithm was employed for calibration, iteratively refining the hand-eye and kinematic parameters. Finally, experimental validation was conducted on a KUKA KR500 industrial robot, with laser tracker measurements as the reference standard. Compared with the traditional checkerboard method, the proposed approach not only expands the calibration space but also significantly reduces the robot's absolute positioning error, from 1.359 mm to 0.472 mm.
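The core of the calibration step is a nonlinear least-squares fit: kinematic parameters are adjusted so that the poses predicted by the forward-kinematic model match the vision-measured poses. A minimal sketch of this idea, using a hypothetical two-link planar arm in place of the paper's MDH model of the KR500 (the link lengths, joint samples, and measurements below are illustrative assumptions, not the paper's data):

```python
import numpy as np
from scipy.optimize import least_squares

def forward(params, thetas):
    """Forward kinematics of a toy 2-link planar arm.

    params: link lengths (l1, l2) -- stand-ins for the MDH parameters.
    thetas: (N, 2) array of joint angles.
    Returns (N, 2) end-effector positions.
    """
    l1, l2 = params
    x = l1 * np.cos(thetas[:, 0]) + l2 * np.cos(thetas[:, 0] + thetas[:, 1])
    y = l1 * np.sin(thetas[:, 0]) + l2 * np.sin(thetas[:, 0] + thetas[:, 1])
    return np.column_stack([x, y])

# "True" geometry vs. the nominal (erroneous) model to be calibrated.
true_params = np.array([0.52, 0.31])
nominal_params = np.array([0.50, 0.30])

# Sample joint configurations; the measured positions play the role of
# the end-effector poses recovered from the ArUco map.
rng = np.random.default_rng(0)
thetas = rng.uniform(-np.pi, np.pi, size=(30, 2))
measured = forward(true_params, thetas)

def residuals(p):
    # Positioning error: model prediction minus vision measurement.
    return (forward(p, thetas) - measured).ravel()

# Levenberg-Marquardt refinement of the kinematic parameters.
sol = least_squares(residuals, nominal_params, method="lm")
print(sol.x)  # recovers the true link lengths [0.52, 0.31]
```

In the paper's setting the parameter vector additionally contains the hand-eye transform, and the residual is built from full 3D poses rather than planar positions, but the iterative structure is the same.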