Autonomous navigation enables mobile robots to perform a variety of tasks without human assistance, and navigation based on visual sensors makes this possible in GPS-denied environments. Vision-based navigation performs feature detection, matching, and pose estimation from camera images. This paper presents a new approach to autonomous navigation for mobile robots that uses a Color-based Image Segmentation and Centroid Detection algorithm in place of traditional feature detection algorithms. Because the feature points are known in advance, the algorithm matches features across images directly, and the camera pose is then estimated with conventional techniques such as epipolar geometry and Perspective-n-Point (PnP) algorithms. The study includes camera calibration to estimate the intrinsic parameters and convert them to physical units, ensuring accurate measurements. Experimental datasets are used to analyze the performance of pose estimation based on epipolar geometry, Perspective-3-Point (P3P), and Efficient Perspective-n-Point (EPnP). To enhance accuracy, the P3P algorithm is modified to consider combinations of image points for pose estimation. The paper concludes with a comparative analysis of the original and modified P3P algorithms, providing insight into their respective performances. Overall, the paper aims to compare the pose estimation algorithms used for visual navigation.

Keywords: pose estimation, visual navigation, camera calibration, perspective-n-point, efficient PnP, epipolar geometry

Abbreviations: PnP, perspective-n-point; P3P, perspective-3-point; EPnP, efficient perspective-n-point; SLAM, simultaneous localization and mapping; VO, visual odometry; GPS, global positioning system; SURF, speeded up robust features; SIFT, scale-invariant feature transform; EKF, extended Kalman filter; RGB, red-green-blue; HSV, hue-saturation-value; CNN, convolutional neural network
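To illustrate the segmentation stage described above, the following is a minimal sketch, assuming an OpenCV/Python pipeline and illustrative HSV thresholds (the paper's exact color ranges and blob filtering are not given in the abstract): colored markers are segmented by thresholding in HSV space, and the centroid of each resulting blob serves as a feature point.

```python
import cv2

def detect_marker_centroids(bgr, lower_hsv, upper_hsv, min_area=50.0):
    """Color-based segmentation followed by centroid detection.
    The thresholds and minimum-area filter are illustrative assumptions."""
    # Segment pixels falling inside the given HSV color range.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    # Each connected blob of the segmented color becomes a candidate marker.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > min_area:  # reject small blobs / noise
            # Centroid from the first-order image moments.
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids
```

Because each marker has a known color, the detected centroids can be associated with known 3D feature points without descriptor matching, which is the property the subsequent pose estimation step relies on.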
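For the epipolar-geometry branch, the relative pose between two views can be recovered from the matched centroids via the essential matrix. The sketch below uses standard OpenCV calls and is not necessarily the paper's implementation; K denotes the calibrated intrinsic matrix and pts1/pts2 are Nx2 arrays of matched image points.

```python
import cv2

def relative_pose_from_matches(pts1, pts2, K):
    """Recover relative camera rotation R and translation t from matched
    image points via the essential matrix (epipolar geometry)."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC, threshold=1.0)
    # Decompose E and resolve the fourfold ambiguity by cheirality check.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t  # t is recovered only up to scale
```

Note that, unlike the PnP methods, the translation recovered this way is known only up to scale, a standard limitation of two-view epipolar geometry.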
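The modified P3P approach, as described in the abstract, considers combinations of image points. One plausible reading, sketched below under the assumption that the best pose is selected by reprojection error (the abstract does not specify the selection or fusion rule), solves P3P for every 4-point subset (OpenCV's SOLVEPNP_P3P requires exactly 4 points) and keeps the pose that best explains all points.

```python
import itertools
import numpy as np
import cv2

def p3p_over_combinations(object_pts, image_pts, K, dist):
    """Solve P3P for every 4-point subset of the Nx3 object points and
    Nx2 image points, keeping the pose with the lowest mean reprojection
    error over all points. The selection criterion is an assumption."""
    best_err, best_pose = np.inf, None
    for idx in itertools.combinations(range(len(object_pts)), 4):
        obj = object_pts[list(idx)].astype(np.float32)
        img = image_pts[list(idx)].astype(np.float32)
        ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist,
                                      flags=cv2.SOLVEPNP_P3P)
        if not ok:
            continue
        # Score the candidate pose against every available point.
        proj, _ = cv2.projectPoints(object_pts, rvec, tvec, K, dist)
        err = np.linalg.norm(proj.reshape(-1, 2) - image_pts, axis=1).mean()
        if err < best_err:
            best_err, best_pose = err, (rvec, tvec)
    return best_pose, best_err
```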