Abstract

Simultaneous Localization and Mapping (SLAM) has attracted tremendous interest from the research community in recent years due to its ability to make a robot truly autonomous in navigation. The capability of an autonomous robot to locate itself within an environment while constructing a map of that environment at the same time is known as SLAM. Various sensors are employed in SLAM, generally characterized as laser, sonar, or vision sensors. Visual Simultaneous Localization and Mapping (VSLAM) refers to the case in which an autonomous robot is equipped with a vision sensor, such as a monocular, stereo, omnidirectional, or Red Green Blue Depth (RGB-D) camera, to localize itself and map the environment. Numerous researchers have studied VSLAM with impressive results; however, many challenges still exist. The purpose of this paper is to review the work done by some of these researchers in VSLAM. We conducted a literature survey of several studies and outlined their frameworks, challenges, and limitations. Open issues, challenges, and directions for future research in VSLAM are also discussed.
