Abstract
Research toward unmanned mobile robot navigation has gained significant importance in the last decade due to its potential applications in the location-based services industry. The increasing construction of buildings with large indoor spaces has made it difficult for humans to operate within such environments. In this study, an indoor navigation algorithm for a mobile robot is developed with vision cameras. Using two monocular cameras (one looking forward and one looking downward), the developed algorithms exploit salient features of the environment to estimate rotational and translational motions for real-time positioning of the mobile robot. In parallel, an algorithm based on artificial landmark recognition is developed. The artificial landmarks are arrow-shaped signboards whose colors represent different paths. These algorithms are integrated into a framework designed for real-time positioning and autonomous navigation of the mobile robot. Experiments are performed to validate the designed system using the PIONEER P3-AT mobile robot. The developed algorithm was able to detect and extract artificial landmark information at distances of up to 3 m for mobile robot guidance. Experimental results show an average deviation of 0.167 m from the ideal path, indicating the good capability and performance of the developed autonomous navigation algorithm.
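The abstract does not describe the landmark recognition step in detail, so the following is only a minimal illustrative sketch of how arrow-shaped, color-coded signboards could be detected with OpenCV. The HSV color ranges, the area threshold, the seven-vertex polygon test, and the function and label names are all assumptions made for illustration, not the authors' published implementation.

import cv2
import numpy as np

# Assumed HSV ranges, one per signboard color / path label (hypothetical values).
COLOR_RANGES = {
    "path_red":  ((0, 120, 80), (10, 255, 255)),
    "path_blue": ((100, 120, 80), (130, 255, 255)),
}

def detect_signboards(frame_bgr, min_area=500):
    """Return (label, contour) pairs for candidate arrow signboards."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    detections = []
    for label, (lo, hi) in COLOR_RANGES.items():
        # Threshold the assumed color band and clean up small noise.
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)  # OpenCV >= 4
        for c in contours:
            if cv2.contourArea(c) < min_area:
                continue
            # A seven-vertex polygon approximation is a crude proxy for an arrow outline.
            approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
            if len(approx) == 7:
                detections.append((label, approx))
    return detections

In practice, such a detector would be combined with the arrow's orientation (e.g., from the contour's pointed vertex) to decide which path the signboard indicates; the sketch above only covers candidate detection.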
Highlights
Autonomous navigation for a ground-based mobile robot has become increasingly desirable in recent years, in both indoor and outdoor environments
This paper proposes an alternative solution for vision-based navigation in indoor environments using two monocular cameras arranged in a unique configuration: one looking forward and one looking downward
This paper aims to demonstrate the potential of vision sensors to accurately estimate the real-time position of a mobile robot and to guide it during navigation in indoor environments
Summary
Autonomous navigation for a ground-based mobile robot has become increasingly desirable in recent years, in both indoor and outdoor environments. While monocular vision fails to operate in completely unknown environments (Zhang et al., 2014), stereovision tends to be computationally heavier and has a limited range (Huang, 2013; Hong et al., 2012), and vision aided by inertial sensors is the most costly configuration and suffers from delays (Hesch et al., 2013). Despite these limitations, monocular vision appears to be the most suitable candidate for a vision-based navigation solution, since it provides rich information for a high level of intelligence with a lower-cost sensor (Ye et al., 2012).