Abstract
Countless applications today use mobile robots, including autonomous navigation, security patrolling, housework, search-and-rescue operations, material handling, manufacturing, and automated transportation systems. Regardless of the application, a mobile robot must use a robust autonomous navigation system. Autonomous navigation remains one of the primary challenges in the mobile-robot industry; many control algorithms and techniques have recently been developed that aim to overcome this challenge. Among autonomous navigation methods, vision-based systems have been growing in recent years due to rapid gains in computational power and the reliability of visual sensors. The primary focus of research into vision-based navigation is to allow a mobile robot to navigate in an unstructured environment without collision. In recent years, several researchers have looked at methods for setting up autonomous mobile robots for navigational tasks. Among these methods, stereovision-based navigation is a promising approach for reliable and efficient navigation. In this article, we develop a novel mapping system for robust autonomous navigation. The main contribution of this article is the fusion of multi-baseline stereovision (narrow and wide baselines) with laser-range readings to enhance the accuracy of the point cloud, to reduce the ambiguity of correspondence matching, and to extend the field of view of the proposed mapping system to 180°. Another contribution is pruning the region of interest of the three-dimensional point clouds to reduce the computational burden of the stereo process. We therefore call the proposed system a multi-sensor, multi-baseline mapping system. The experimental results illustrate the robustness and accuracy of the proposed system.
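The article itself does not spell out the fusion or pruning algorithms at this level of detail; the sketch below only illustrates the underlying stereo geometry and the region-of-interest idea, assuming a pinhole stereo model (depth Z = f·B/d). The function names, the 3 m near/far switching threshold, and the simple selection rule for combining the two baselines are illustrative assumptions, not the authors' method:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Triangulate depth Z = f * B / d for a pinhole stereo pair.
    Invalid (non-positive) disparities map to infinite depth."""
    d = np.asarray(disparity, dtype=float)
    depth = np.full_like(d, np.inf)
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

def fuse_baselines(depth_narrow, depth_wide, near_limit_m=3.0):
    """Toy fusion rule (an assumption, not the paper's algorithm):
    trust the narrow baseline for near points, where correspondence
    matching is less ambiguous, and the wide baseline for far points,
    where depth resolution is better."""
    return np.where(depth_narrow < near_limit_m, depth_narrow, depth_wide)

def prune_roi(points, x_lim, y_lim, z_lim):
    """Keep only the 3-D points that fall inside an axis-aligned
    region of interest, discarding the rest before further processing."""
    p = np.asarray(points, dtype=float)
    mask = ((x_lim[0] <= p[:, 0]) & (p[:, 0] <= x_lim[1]) &
            (y_lim[0] <= p[:, 1]) & (p[:, 1] <= y_lim[1]) &
            (z_lim[0] <= p[:, 2]) & (p[:, 2] <= z_lim[1]))
    return p[mask]
```

For example, with a focal length of 500 px and a 0.1 m baseline, a 10 px disparity triangulates to 5 m; points outside the chosen box are dropped before stereo post-processing, which is what reduces the computational burden.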
Highlights
The main focus of autonomous mobile-robot navigation research in indoor environments is to allow a mobile robot to navigate without collision
Robots are equipped with many sensors, such as cameras, lasers, infrared sensors, global positioning system (GPS) receivers, and ultrasonic sensors, which allow them to perceive their environment
Several researchers have looked at methods for setting up mobile robots for autonomous navigation tasks
Summary
College of Computer and Information Sciences (CCIS), King Saud University, Riyadh, Saudi Arabia. Published in Advances in Mechanical Engineering.

The main focus of autonomous mobile-robot navigation research in indoor environments is to allow a mobile robot to navigate without collision. Robots are equipped with many sensors, such as cameras, lasers, infrared sensors, global positioning system (GPS) receivers, and ultrasonic sensors, which allow them to perceive their environment. Several researchers have looked at methods for setting up mobile robots for autonomous navigation tasks. Autonomous navigation comprises several interrelated steps, including mapping (Figure 1). Mapping in robotics is the task of constructing a spatial representation of a robot's environment. Until the 1990s, the field of robotic mapping was divided into two categories: metric and topological approaches.[1,2]
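A metric map, unlike a topological one, stores the environment in a geometric coordinate frame; the smallest common example is a 2-D occupancy grid. The following is a minimal sketch of that idea, assuming laser end-points already expressed in the map frame (the function name, cell resolution, and grid size are illustrative, not taken from the article):

```python
import numpy as np

def mark_hits(grid, hits, resolution=0.1, origin=(0.0, 0.0)):
    """Mark laser end-points as occupied cells in a 2-D metric grid.

    grid       : 2-D int array, one cell per (resolution x resolution) patch
    hits       : (N, 2) array of (x, y) end-points in metres, map frame
    resolution : cell edge length in metres
    origin     : world coordinates of cell (0, 0)
    """
    for x, y in np.asarray(hits, dtype=float):
        i = int((x - origin[0]) / resolution)
        j = int((y - origin[1]) / resolution)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            grid[i, j] = 1  # cell contains at least one range return
    return grid
```

A topological approach would instead store a graph of distinctive places and the connections between them, trading geometric precision for compactness.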