Abstract
Robot localization is a fundamental task in mobile robot navigation, and many methods address it in both indoor and outdoor environments. However, robot localization in textureless environments remains challenging because the scene looks nearly identical at almost every position. In this work, we propose a method that localizes robots in textureless environments. We combine Histogram of Oriented Gradients (HOG) and Speeded-Up Robust Features (SURF) descriptors with depth information to form a Depth-HOG-SURF multifeature descriptor, which is then used for image matching. K-means clustering partitions the descriptors into groups that collectively form a visual vocabulary, and all images in the database are encoded using this vocabulary. The experimental results show that the proposed method performs well.
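The pipeline in the abstract (extract descriptors, cluster them with k-means into a visual vocabulary, then encode each database image as a histogram over that vocabulary) can be sketched as follows. This is a minimal illustration in plain NumPy: the synthetic random vectors stand in for real Depth-HOG-SURF descriptors, the k-means loop is a textbook Lloyd's iteration rather than the authors' exact implementation, and the vocabulary size `k=5` is arbitrary.

```python
import numpy as np

def build_vocabulary(descriptors, k, n_iters=20, seed=0):
    """Cluster descriptors into k visual words with plain k-means (Lloyd's)."""
    rng = np.random.default_rng(seed)
    # Initialize centers from randomly chosen descriptors.
    centers = descriptors[rng.choice(len(descriptors), size=k, replace=False)]
    for _ in range(n_iters):
        # Assign each descriptor to its nearest center.
        dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned descriptors.
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def encode_image(descriptors, centers):
    """Encode one image as a normalized histogram of visual-word counts."""
    dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()

# Synthetic stand-ins for Depth-HOG-SURF descriptors (the real pipeline
# would extract them from images and depth maps).
rng = np.random.default_rng(1)
train_desc = rng.normal(size=(200, 8))   # 200 descriptors, 8-D for illustration
vocab = build_vocabulary(train_desc, k=5)
query_hist = encode_image(rng.normal(size=(30, 8)), vocab)
```

At query time, localization reduces to comparing the query image's histogram against the stored histograms of the database images, e.g. with a chi-square or cosine distance.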
Highlights
Robots are becoming increasingly common in many applications
Histogram of Oriented Gradients (HOG) and Speeded-Up Robust Features (SURF) have proved effective for object detection [21,22,23]
This section discusses the experimental results of the proposed method: the success rate on the test-set images, robot localization, and topological robot navigation
Summary
Robots are becoming increasingly common in many applications: they serve in hospitals, offices, and households. For robots to accomplish these complicated missions and work side by side with humans, researchers must address various issues. Among these, robot localization is one of the most fundamental. Vision-based robot localization still struggles with changing lighting conditions and textureless environments, and few methods address localization in textureless environments. Localization in such environments remains a challenge because the scene looks almost the same at every position. In the proposed method, we combine multiple features and depth information to address the problem of localization in textureless environments.
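The combination of multiple features with depth information can be sketched as a simple descriptor concatenation. Everything below is an assumption for illustration: the paper does not specify the descriptor dimensions, the depth statistics used, or the normalization scheme, so the per-part L2 normalization and mean/std depth summary here are hypothetical choices.

```python
import numpy as np

def depth_hog_surf(hog_vec, surf_vec, depth_patch):
    """Form one multifeature descriptor by concatenating an L2-normalized
    HOG block, an L2-normalized SURF descriptor, and simple depth
    statistics. The exact fusion scheme is an assumption, not the
    paper's formulation."""
    def l2(v):
        n = np.linalg.norm(v)
        return v / n if n > 0 else v
    depth_stats = np.array([depth_patch.mean(), depth_patch.std()])
    return np.concatenate([l2(hog_vec), l2(surf_vec), l2(depth_stats)])

# Hypothetical dimensions: a 36-D HOG block, a 64-D SURF descriptor,
# and a 16x16 depth patch around the keypoint.
rng = np.random.default_rng(0)
hog = rng.random(36)
surf = rng.random(64)
depth = rng.random((16, 16))
feat = depth_hog_surf(hog, surf, depth)  # 36 + 64 + 2 = 102 dimensions
```

In a real system, `hog_vec` and `surf_vec` would come from feature extractors (e.g. OpenCV's `HOGDescriptor` and SURF), and `depth_patch` from the depth sensor around each keypoint.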