Abstract

Simultaneous Localization and Mapping (SLAM), the capability of an autonomous robot to locate itself within an environment while constructing a map of it at the same time, has attracted tremendous interest from the research community in recent years because it makes a robot truly independent in navigation. Visual SLAM (VSLAM) is the variant in which the robot explores the environment using a vision sensor such as a camera. Many researchers have studied VSLAM with excellent results; however, the challenge of environmental noise, such as varying light intensity, still persists. In this study we propose a VSLAM framework that addresses the challenge of light intensity in the environment in order to improve the performance of the VSLAM system. Within the VSLAM pipeline, we introduce a filtering algorithm that reduces the effect of noise on images captured from the environment. The outcome of this study is a framework that enables an autonomous robot to successfully navigate, localize itself, and map its environment.
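The abstract does not specify which filtering algorithm is used to suppress image noise, so as a minimal sketch only, the following illustrates one common choice for this kind of preprocessing: a 3x3 median filter, which is often applied to reduce impulse (salt-and-pepper) noise in camera frames before feature extraction. The function name and the toy image are hypothetical, not from the paper.

```python
# Hypothetical illustration of image-noise filtering for a VSLAM pipeline.
# The paper does not name its filtering algorithm; a 3x3 median filter is
# shown here purely as an example of such preprocessing.

from statistics import median

def median_filter_3x3(image):
    """Apply a 3x3 median filter to a 2D list of grayscale pixel values.

    Border pixels are left unchanged for simplicity; each interior pixel
    is replaced by the median of its 3x3 neighborhood, which suppresses
    isolated noise spikes while preserving edges better than averaging.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # copy so the input stays untouched
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [image[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(window)
    return out

# Toy 4x4 grayscale patch: the 255 is an impulse-noise outlier.
noisy = [
    [10, 10, 10, 10],
    [10, 255, 12, 10],
    [10, 11, 10, 10],
    [10, 10, 10, 10],
]
clean = median_filter_3x3(noisy)  # the 255 spike is replaced by 10
```

In a real VSLAM system such a filter would run on full camera frames (e.g. via an image-processing library) before feature detection; the pure-Python version above only demonstrates the idea.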
