Abstract

Most visual simultaneous localization and mapping (SLAM) algorithms rely on geometric features such as points, lines, and planes, and treat all features as equally important during optimization, ignoring their semantic information. However, robust and semantically rich features should play a more important role, which is analogous to the mechanism of visual saliency, or visual attention. Therefore, this article aims to mimic this mechanism within a visual SLAM framework by using a saliency prediction model. To overcome the center bias of generic saliency datasets, we propose a method for recomputing the saliency map that considers both geometric and semantic information. We then propose the salient bundle adjustment (SBA) algorithm, which uses the values of the saliency prediction map as weights for the feature points in the traditional bundle adjustment (BA) formulation. Finally, extensive experiments show that the proposed algorithm outperforms state-of-the-art methods such as direct sparse odometry (DSO) and oriented FAST and rotated BRIEF (ORB)-SLAM3 in both indoor and outdoor environments.
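The core idea of weighting feature points by saliency in bundle adjustment can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the paper's implementation: it computes reprojection residuals for a single camera at the origin with identity rotation, and scales each residual by the square root of its saliency weight so that the summed squared cost becomes saliency-weighted, as the abstract describes.

```python
import numpy as np

def weighted_reprojection_residuals(points_3d, observations, K, saliency):
    """Saliency-weighted reprojection residuals (illustrative sketch).

    points_3d:    (N, 3) 3D points in the camera frame.
    observations: (N, 2) measured pixel coordinates.
    K:            (3, 3) camera intrinsics matrix.
    saliency:     (N,) per-point saliency weights in [0, 1].
    """
    # Project the points: homogeneous image coordinates, then dehomogenize.
    proj = (K @ points_3d.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    residuals = proj - observations  # per-point 2D reprojection error
    # Scale by sqrt(w_i) so the squared cost is sum_i w_i * ||e_i||^2,
    # i.e., salient points contribute more to the BA objective.
    return np.sqrt(saliency)[:, None] * residuals
```

In a full SBA pipeline these residuals would feed a nonlinear least-squares solver that jointly refines camera poses and 3D points; here only the weighted cost construction is shown.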
