Abstract

Visual Simultaneous Localization and Mapping (vSLAM) enables mobile robots to localize themselves and build maps of an unknown environment using camera sensors. However, when a mobile robot moves through a low-texture area, point-based vSLAM cannot recover accurate trajectories and maps because too few feature matches are available. To address this problem, edge features in the scene can be fused with point features. This paper presents a visual odometry method that combines edge features and point features and adaptively selects the feature extraction strategy. The proposed approach can be applied to composite environments that contain both low-texture and rich-texture areas. In the feature detection phase, the adaptive strategy measures the richness of image features and selects the appropriate feature extraction method based on a threshold.
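The abstract does not specify the exact richness measure or threshold; as a minimal sketch only, the adaptive selection could be approximated by counting detected point features and falling back to edge extraction when the count is low. The threshold value, the ORB detector, and the Canny edge fallback below are assumptions for illustration, not the paper's actual method.

```python
import cv2

# Assumed threshold on the number of point features indicating a rich-texture area;
# the paper's actual richness criterion is not given in the abstract.
MIN_POINT_FEATURES = 150

orb = cv2.ORB_create(nfeatures=1000)

def extract_features(gray_image):
    """Adaptively choose point or edge features based on texture richness."""
    keypoints, descriptors = orb.detectAndCompute(gray_image, None)
    if keypoints is not None and len(keypoints) >= MIN_POINT_FEATURES:
        # Rich-texture area: point features alone are sufficient.
        return "points", keypoints, descriptors
    # Low-texture area: fall back to edge features (Canny used here as a stand-in).
    edges = cv2.Canny(gray_image, 50, 150)
    return "edges", edges, None
```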
