Abstract

The main focus of this research is to develop an autonomous robot capable of self-navigation in an unknown environment. The proposed system performs autonomous navigation primarily based on the following visually perceived information:

1) Range Estimation: A novel variable single/multi-baseline omnidirectional stereovision system that automatically selects the baseline best suited to the environment, with the establishment of stereo correspondences and triangulation offloaded to the Graphics Processing Unit (GPU). Additionally, as a safety measure, a low-level reactive obstacle avoidance system uses the disparity maps returned by a Bumblebee stereo camera as a secondary source of pseudo-range estimation, steering the mobile robot away from obstacles that the primary system has failed to detect. The two sensors complement one another: the primary sensor's vertical stereo setup is more sensitive to horizontal features, whereas the secondary sensor's horizontal stereo setup is better suited to vertical features.

2) Motion Estimation: A 3DoF visual odometry system combining the distance travelled, estimated by a ground-plane optical flow tracking system, with the bearing estimated by a panoramic visual compass.

3) Place Recognition: An appearance-based place recognition system that detects loop closures using image signatures created from Haar-decomposed omnidirectional images.

These components were integrated into the mobile robot's navigation system, which balances its effort between loop closing and exploration, decides its next course of action, plans a path, and executes the selected path.
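The abstract does not detail how the Haar-based image signatures are built or compared. As a rough illustration only, the sketch below computes the coarse approximation band of a one-level 2D Haar decomposition of a small grayscale image and compares two signatures with a sum of absolute differences; the function names and the distance metric are assumptions for illustration, not the thesis's actual design.

```python
def haar_signature(img):
    """One-level 2D Haar decomposition of a grayscale image (nested lists
    with even dimensions), keeping only the coarse approximation (LL) band
    as a compact appearance signature."""
    sig = []
    for r in range(0, len(img), 2):
        row = []
        for c in range(0, len(img[0]), 2):
            a, b = img[r][c], img[r][c + 1]
            d, e = img[r + 1][c], img[r + 1][c + 1]
            row.append((a + b + d + e) / 4.0)  # average of each 2x2 block
        sig.append(row)
    return sig

def signature_distance(s1, s2):
    """Sum of absolute differences between two signatures of equal size;
    a small distance suggests a possible loop closure candidate."""
    return sum(abs(a - b)
               for r1, r2 in zip(s1, s2)
               for a, b in zip(r1, r2))
```

In practice the decomposition would be applied recursively to the unwrapped omnidirectional image, so that a full-resolution panorama reduces to a few coefficients that are cheap to store and match against every previously visited place.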
As the mobile robot explores the environment, the positional drift in its estimated location grows over time, making it necessary to close loops regularly: loop closures are detected via the place recognition system, and the global consistency of the robot's internal representation of the environment (in the form of a topological map) is maintained by employing a relaxation technique. Given the importance of regular loop closing, an active loop closure detection and validation system, which enables the mobile robot to actively search for loop closures and to validate ambiguous ones, was proposed, developed and validated. A wide variety of experiments was conducted to verify and evaluate the performance of the entire system at both the system and subsystem levels, and all experimental results were compared against ground truth where possible. Fully autonomous experiments combining all of the above were conducted in indoor, semi-outdoor and outdoor environments. In addition, semi-autonomous (goal-oriented) experiments were conducted in which the mobile robot, provided with a priori information in the form of a topological map built offline on a separate occasion, was required to reach a user-specified destination. Finally, the proposed place recognition system was applied to the map merging problem, where experimental results showed improved robustness of loop closure and map merging detection when fused with a laser-based metric SLAM system.
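The abstract names a relaxation technique for keeping the topological map globally consistent after a loop closure, without specifying it. The sketch below is a minimal, generic Gauss-Seidel-style relaxation, assumed for illustration only: each node's 2D position is repeatedly replaced by the average of the positions implied by its incident edges (odometry and loop closure measurements), with one node held fixed as an anchor.

```python
def relax(positions, edges, iters=100):
    """Iteratively relax node positions toward consistency with edge
    measurements.

    positions -- list of (x, y) node estimates; node 0 is held fixed.
    edges     -- list of (i, j, dx, dy): measured displacement from i to j.
    Mutates and returns `positions`.
    """
    for _ in range(iters):
        for k in range(1, len(positions)):  # node 0 anchors the map
            estimates = []
            for (i, j, dx, dy) in edges:
                if j == k:  # predict k forward from i
                    xi, yi = positions[i]
                    estimates.append((xi + dx, yi + dy))
                elif i == k:  # predict k backward from j
                    xj, yj = positions[j]
                    estimates.append((xj - dx, yj - dy))
            if estimates:
                positions[k] = (sum(p[0] for p in estimates) / len(estimates),
                                sum(p[1] for p in estimates) / len(estimates))
    return positions
```

When a loop closure adds an edge between the current node and a previously visited one, rerunning the relaxation spreads the accumulated drift around the loop instead of leaving it concentrated at the point of closure.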
