Abstract

This work is devoted to estimating the position and orientation of a robot carrying a camera, based on the images captured by that camera. The method comprises two stages: learning and navigation. At the learning stage, before navigation, the environment is represented as a cloud of 3D points, and each 3D point is associated with a 32-byte vector describing its keypoint using the ORB descriptor. At the navigation stage, 2D/3D matching is performed between the keypoints extracted from the 2D query image and the cloud of 3D points to estimate the position and orientation of the camera. A new classification of vision-based navigation methods is presented, and an algorithm based on the Structure from Motion (SfM) method for estimating the camera location is proposed. The algorithm uses the FAST detector to extract keypoints and the binary local ORB descriptor to describe them. Descriptors are compared using the Hamming distance to match corresponding points, so the algorithm is suitable for real-time applications. In a practical implementation in an indoor environment, the obtained position error is less than 5 cm, and the angle error is less than 1.5 degrees.
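As a rough illustration of the navigation stage described above, the following is a minimal sketch in Python using OpenCV. The abstract does not state how the pose is computed from the 2D/3D matches, so PnP with RANSAC (cv2.solvePnPRansac) is used here as a plausible stand-in, not as the authors' confirmed method; the names points_3d, descriptors_3d, and K are assumptions for illustration.

```python
import numpy as np
import cv2

def estimate_camera_pose(query_image, points_3d, descriptors_3d, K):
    """Sketch of the navigation stage: 2D/3D matching, then pose estimation.

    points_3d:      (N, 3) array, the 3D point cloud built at the learning stage
    descriptors_3d: (N, 32) uint8 array, one 32-byte ORB descriptor per 3D point
    K:              (3, 3) camera intrinsic matrix (assumed known from calibration)
    """
    # Detect FAST-based keypoints and compute 32-byte binary ORB descriptors.
    orb = cv2.ORB_create()
    keypoints, descriptors_2d = orb.detectAndCompute(query_image, None)

    # Match query descriptors against the map descriptors by Hamming distance.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(descriptors_2d, descriptors_3d)

    # Assemble the 2D/3D correspondences implied by the matches.
    image_pts = np.float32([keypoints[m.queryIdx].pt for m in matches])
    object_pts = np.float32([points_3d[m.trainIdx] for m in matches])

    # Assumed step: solve the Perspective-n-Point problem with RANSAC
    # to estimate the camera pose while rejecting outlier matches.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_pts, image_pts, K, None)
    if not ok:
        raise RuntimeError("pose estimation failed")

    # Convert the rotation vector to a matrix and recover the camera
    # position in world coordinates (position = -R^T * t).
    R, _ = cv2.Rodrigues(rvec)
    camera_position = -R.T @ tvec
    return R, camera_position
```

Binary ORB descriptors make the brute-force Hamming matcher cheap (XOR plus popcount per comparison), which is consistent with the real-time claim in the abstract.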

