Abstract
Autonomous robots operating in an unknown and uncertain environment must be able to cope with dynamic changes to that environment. Navigating a mobile robot successfully to a goal while avoiding obstacles in a cluttered environment is a challenging problem. This paper presents a new behaviour-based architecture for mapless navigation. The architecture is composed of several modules, each of which generates a behaviour. A novel method, inspired by a visual homing strategy, is adapted to a monocular vision-based system to overcome goal-based navigation problems. A neural network-based obstacle avoidance strategy is designed using a 2-D scanning laser. To evaluate the performance of the proposed architecture, the system has been tested using Microsoft Robotics Studio (MRS), a powerful 3D simulation environment. In addition, real experiments guiding a Pioneer 3-DX mobile robot, equipped with a pan-tilt-zoom camera, in a cluttered environment are presented. The analysis of the results validates the proposed behaviour-based navigation strategy.
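To illustrate the general idea of a behaviour-based architecture in which modules propose behaviours and an arbiter selects the robot's motion, the following Python sketch shows a simple priority-based arbitration loop. This is a minimal illustration under assumed conventions (the class names, the safety threshold, and the priority scheme are hypothetical), not the paper's actual architecture.

```python
# Minimal sketch of a behaviour-based arbitration loop, assuming a simple
# priority scheme in which obstacle avoidance overrides goal homing.
# Module names and thresholds are hypothetical, not taken from the paper.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Command:
    linear: float   # forward velocity (m/s)
    angular: float  # turn rate (rad/s)

class Behaviour:
    def propose(self, laser_scan, image) -> Optional[Command]:
        """Return a motion command, or None if the behaviour is inactive."""
        raise NotImplementedError

class ObstacleAvoidance(Behaviour):
    def propose(self, laser_scan, image):
        # Activate only when an obstacle is closer than a safety threshold.
        if min(laser_scan) < 0.5:
            return Command(linear=0.1, angular=0.8)  # slow down and turn away
        return None

class GoalHoming(Behaviour):
    def propose(self, laser_scan, image):
        # Placeholder: steer using an image-based homing direction.
        return Command(linear=0.3, angular=0.0)

def arbitrate(behaviours, laser_scan, image) -> Command:
    # The first active behaviour in priority order wins.
    for b in behaviours:
        cmd = b.propose(laser_scan, image)
        if cmd is not None:
            return cmd
    return Command(0.0, 0.0)  # stop if no behaviour is active

# Usage: higher-priority behaviours are listed first.
behaviours = [ObstacleAvoidance(), GoalHoming()]
cmd = arbitrate(behaviours, laser_scan=[1.2, 0.4, 2.0], image=None)
```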
Highlights
A number of potential markets are slowly emerging for mobile robotic systems.
The system was evaluated using the Microsoft Robotics Studio (MRS) simulation environment. The results of these experiments show that the proposed architecture provides a safe and efficient method for indoor navigation based on a mapless navigation strategy.
This paper describes a novel behaviour-based architecture for mobile robot image-based navigation.
Summary
Entertainment applications and household or office assistants are the primary targets in this area of development. These types of robots are designed to move around within an often highly unstructured and unpredictable environment. The Scale-Invariant Feature Transform (SIFT) algorithm, whose applications include object detection, robot navigation, 3-D modelling, video/image tracking and gesture recognition, was proposed by Lowe [10]. Lowe's [10] patented method can robustly identify objects even among clutter and under partial occlusion, because SIFT features are invariant to image scaling and rotation, and partially invariant to changes in illumination and 3-D camera viewpoint. The SIFT algorithm is used to find correspondences between the goal and reference images in order to generate control parameters for the robot.
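The matching step described above can be sketched with OpenCV's SIFT implementation. This is only an assumed illustration of finding correspondences between a goal image and the current view (file names are placeholders, and the ratio-test threshold is a common default); the paper's actual mapping from correspondences to control parameters is not reproduced here.

```python
# Minimal sketch of SIFT correspondence matching between a goal image and the
# current image, assuming OpenCV >= 4.4 (cv2.SIFT_create). File names are
# placeholders; the control-parameter computation is not shown.
import cv2

goal = cv2.imread("goal.png", cv2.IMREAD_GRAYSCALE)
current = cv2.imread("current.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_goal, des_goal = sift.detectAndCompute(goal, None)
kp_cur, des_cur = sift.detectAndCompute(current, None)

# Brute-force matching with Lowe's ratio test to reject ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
raw_matches = matcher.knnMatch(des_goal, des_cur, k=2)
good = [m for m, n in raw_matches if m.distance < 0.75 * n.distance]

# Matched keypoint coordinates; their image offsets could feed a steering
# command (a simplification, not the paper's method).
pairs = [(kp_goal[m.queryIdx].pt, kp_cur[m.trainIdx].pt) for m in good]
print(f"{len(good)} reliable correspondences found")
```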