Abstract

Simultaneous Localization and Mapping (SLAM) has attracted considerable attention from the research community in recent years because of its potential to make mobile robots genuinely autonomous. Visual Simultaneous Localization and Mapping (VSLAM) arises when an autonomous mobile robot is equipped with a vision sensor such as a monocular, stereo, omnidirectional, or Red Green Blue Depth (RGB-D) camera to localize itself in and map an unknown environment. The purpose of this research is to address environmental noise, such as variation in light intensity in a static environment, which has been an issue that renders VSLAM systems ineffective. In this study, we introduce an Image Filtering Algorithm into the VSLAM method to reduce noise and improve the robustness of the system in a static environment, combined with the Extended Kalman Filter (EKF) SLAM algorithm for localization and mapping and the A* (star) algorithm for navigation. The performance of the approach is evaluated in simulation. Experimental results show that sixty percent of the landmarks (land features) in the simulation environment are detected. The inclusion of the Image Filtering Algorithm enables the VSLAM system to navigate in a noisy environment.
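
The sketch below illustrates, under stated assumptions, how an image-filtering stage might sit in front of the landmark detection that feeds an EKF-SLAM back end. The abstract does not specify which filter or feature detector is used, so the Gaussian blur, the ORB detector, and the function names here are illustrative assumptions only, not the authors' implementation.

```python
import numpy as np
import cv2


def preprocess_frame(frame: np.ndarray, ksize: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Suppress pixel-level noise (e.g. from varying light intensity) before
    landmark detection. The Gaussian kernel here is an assumption; the paper
    only states that an image-filtering step precedes the VSLAM pipeline."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.GaussianBlur(gray, (ksize, ksize), sigma)


def detect_landmarks(frame: np.ndarray):
    """Hypothetical landmark detector: ORB keypoints stand in for the
    'land features' that an EKF-SLAM update step would track."""
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(frame, None)
    return keypoints, descriptors


# Example usage: filter each camera frame, then hand the detected features to
# the EKF-SLAM update and the A* planner (both outside the scope of this sketch).
# frame = cv2.imread("frame_000.png")
# keypoints, descriptors = detect_landmarks(preprocess_frame(frame))
```

The design intent, as described in the abstract, is that reducing sensor noise before feature extraction lowers the rate of spurious or missed landmarks, which in turn keeps the EKF state estimate and the A* path planning usable in a noisy static environment.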
