Abstract

In this paper, a vision-based robot-to-robot relative pose estimation method is proposed for markerless wheeled mobile robots using an RGB-D camera. The proposed method comprises three parts: (i) object detection using a machine vision technique with RGB data, (ii) relative position estimation by averaging over the point cloud with outlier rejection, and (iii) relative orientation estimation using point cloud scan matching through iterative closest point (ICP) methods. The neighboring or target robots are detected using YOLOv3-tiny so that the position of the target robot can be determined in pixel coordinates. To estimate the 3D position of the target robot in the camera coordinates of the ego-robot, the object detection results are used along with the depth information and the precalibrated pinhole camera model. To estimate the orientation of the target robot, a point cloud is generated from the depth image, downsampled, and then used for scan matching via ICP. For experimental verification, the proposed relative pose estimation method was implemented as a Robot Operating System (ROS) package and tested on a small-scale differential-drive wheeled mobile robot, TurtleBot3; the estimation results were compared with the ground truth in real-world environments.
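As a rough illustration of the pipeline described above, the following Python sketch back-projects the detected bounding-box pixels into 3D camera coordinates with the pinhole model, averages them with a simple outlier-rejection rule, and runs ICP on downsampled point clouds. All names, intrinsics, and thresholds here are illustrative assumptions, and Open3D is used as a stand-in ICP implementation; the abstract does not specify the authors' actual library or rejection rule.

```python
# Hedged sketch of the three-stage pipeline, NOT the authors' implementation.
# Intrinsics (FX, FY, CX, CY), thresholds, and helper names are assumptions.
import numpy as np
import open3d as o3d

# Assumed pre-calibrated pinhole intrinsics of the RGB-D camera.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def backproject(u, v, z):
    """Back-project pixel (u, v) with depth z [m] into 3D camera coordinates."""
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

def estimate_position(depth, bbox):
    """Stage (ii): average the back-projected points inside the detected
    bounding box, rejecting outliers with a simple median-depth rule."""
    u0, v0, u1, v1 = bbox  # pixel corners from the YOLOv3-tiny detector
    pts = []
    for v in range(v0, v1):
        for u in range(u0, u1):
            z = depth[v, u]
            if z > 0:  # skip invalid depth returns
                pts.append(backproject(u, v, z))
    pts = np.array(pts)
    # Keep points near the median depth (a stand-in for whatever
    # outlier-rejection rule the paper actually uses).
    med = np.median(pts[:, 2])
    inliers = pts[np.abs(pts[:, 2] - med) < 0.2]
    return inliers.mean(axis=0), inliers

def estimate_orientation(source_pts, target_pts, voxel=0.01):
    """Stage (iii): downsample both clouds and run point-to-point ICP;
    the rotation block of the resulting transform gives the relative
    orientation of the target robot."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(source_pts))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(target_pts))
    src = src.voxel_down_sample(voxel)
    tgt = tgt.voxel_down_sample(voxel)
    reg = o3d.pipelines.registration.registration_icp(
        src, tgt, max_correspondence_distance=0.05, init=np.eye(4),
        estimation_method=o3d.pipelines.registration
            .TransformationEstimationPointToPoint())
    return reg.transformation[:3, :3]  # 3x3 rotation matrix
```

In practice, `source_pts` would be a reference model of the target robot and `target_pts` the segmented observation, so the ICP transform encodes the robot-to-robot relative pose; the position from stage (ii) can serve as the translation part of the initial guess.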
