The main contribution of this paper is a six-degree-of-freedom (6-DoF) refueling robotic arm positioning and docking technology guided by an RGB-D camera, together with in-depth research on and experimental validation of the technology. We integrate the YOLOv8 algorithm with the Perspective-n-Point (PnP) algorithm to achieve precise detection and pose estimation of the target refueling interface, focusing on the recognition and positioning challenges that a specialized refueling interface poses for the 6-DoF robotic arm during automated refueling. To capture the unique characteristics of the refueling interface, we developed a dedicated dataset of the specialized refueling connectors, ensuring that the YOLO algorithm accurately identifies the target interfaces. The detected interface information is then converted into precise 6-DoF pose data using the PnP algorithm, and these data are used to determine the desired end-effector pose of the robotic arm. The arm's motion is controlled through a trajectory planning algorithm to complete the refueling gun docking process. An experimental setup was established in the laboratory to validate the accuracy of the visual recognition and the applicability of the robotic arm's docking posture. The experimental results demonstrate that, under general lighting conditions, the recognition accuracy of the proposed docking method meets the docking requirements. Compared with traditional OpenCV-based vision-guided methods, the proposed visual guidance algorithm exhibits better adaptability and effectively provides pose information for the robotic arm.
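To make the detection-to-pose pipeline concrete, the minimal sketch below pairs a YOLOv8 detector with OpenCV's solvePnP, as described in the abstract. The weight file name, the 3-D interface reference points, the camera intrinsics, and the use of bounding-box corners as 2-D correspondences are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch (not the authors' implementation): YOLOv8 detection of the
# refueling interface followed by OpenCV solvePnP pose estimation.
# File names, 3-D interface model points, and camera intrinsics below are
# illustrative assumptions.
import cv2
import numpy as np
from ultralytics import YOLO

# Hypothetical weights trained on the dedicated refueling-connector dataset.
model = YOLO("refuel_interface_yolov8.pt")

# Assumed 3-D coordinates (metres) of four reference points on the refueling
# interface, expressed in the interface's own coordinate frame.
object_points = np.array([
    [-0.04, -0.04, 0.0],
    [ 0.04, -0.04, 0.0],
    [ 0.04,  0.04, 0.0],
    [-0.04,  0.04, 0.0],
], dtype=np.float64)

# Assumed RGB-D camera intrinsics; in practice these come from calibration.
camera_matrix = np.array([[615.0, 0.0, 320.0],
                          [0.0, 615.0, 240.0],
                          [0.0,   0.0,   1.0]], dtype=np.float64)
dist_coeffs = np.zeros(5)

frame = cv2.imread("dock_view.png")     # current RGB frame (assumed path)
result = model(frame)[0]                # run YOLOv8 detection

if len(result.boxes) > 0:
    # Simplification: use the corners of the first detected bounding box as
    # the 2-D correspondences; dedicated keypoints would be more accurate.
    x1, y1, x2, y2 = result.boxes.xyxy[0].tolist()
    image_points = np.array([[x1, y1], [x2, y1], [x2, y2], [x1, y2]],
                            dtype=np.float64)

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs)
    if ok:
        R, _ = cv2.Rodrigues(rvec)      # rotation matrix of the interface
        # (R, tvec) give the interface pose in the camera frame; a hand-eye
        # transform would then map it to the desired end-effector pose.
        print("interface pose:\n", R, "\n", tvec.ravel())
```

In this sketch the pose is expressed in the camera frame; mapping it to the robot base frame requires the camera-to-end-effector (hand-eye) calibration, which the abstract implies but does not detail.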