Abstract

This paper describes a brain-inspired simultaneous localization and mapping (SLAM) system for a mobile robot that uses oriented features from accelerated segment test (FAST) and rotated binary robust independent elementary features (BRIEF), together known as ORB, extracted from an RGB (red, green, blue) sensor. The core SLAM system, dubbed RatSLAM, constructs a cognitive map from raw odometry and the visual scenes observed along the traveled path. Unlike the existing RatSLAM system, which represents each visual image with only a simple vector, we employ an efficient and very fast descriptor method, ORB, to extract features from the RGB images. Experiments show that these features are well suited to recognizing sequences of familiar visual scenes. Thus, when loop closures are detected, the descriptive features help correct the pose estimate by driving loop closure and localization in a map correction algorithm. The efficiency and robustness of our method are also demonstrated through comparisons with other visual processing algorithms.
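For concreteness, the sketch below illustrates one common way to extract and match ORB descriptors between two RGB frames using OpenCV. It is only a minimal illustration, not the authors' implementation; the image file names and the distance threshold are assumptions rather than values from the paper.

import cv2

# Extract ORB keypoints/descriptors from two frames (file names are placeholders).
orb = cv2.ORB_create(nfeatures=500)          # FAST keypoints + rotated BRIEF descriptors
frame_a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
frame_b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)
kp_a, des_a = orb.detectAndCompute(frame_a, None)
kp_b, des_b = orb.detectAndCompute(frame_b, None)

# ORB descriptors are binary, so Hamming distance is the appropriate metric.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_a, des_b)

# A simple similarity score: fraction of keypoints with a good cross-checked match.
# The distance cutoff (64) is an assumed value, not one reported in the paper.
good = [m for m in matches if m.distance < 64]
score = len(good) / max(len(kp_a), 1)
print(f"matched {len(good)} features, similarity score {score:.2f}")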

Highlights

  • Animals have an instinctive ability to explore and navigate in an unknown space

  • We applied the oriented FAST and rotated BRIEF (ORB) feature extraction approach to the RatSLAM system to build a cognitive map for a mobile robot

  • We compared the performance of different visual processing methods, such as the scale-invariant feature transform (SIFT) and existing feature extraction methods [19, 20], for both RGB and RGB-D signals; a minimal comparison sketch follows this list
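A minimal way to compare the extraction cost of SIFT and ORB on a single frame is sketched below. The image file, feature budget, and use of OpenCV are illustrative assumptions and do not reproduce the paper's evaluation protocol.

import time
import cv2

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # placeholder test image

detectors = {
    "ORB": cv2.ORB_create(nfeatures=500),
    "SIFT": cv2.SIFT_create(nfeatures=500),            # requires OpenCV >= 4.4
}

for name, det in detectors.items():
    t0 = time.perf_counter()
    kp, des = det.detectAndCompute(img, None)
    dt = (time.perf_counter() - t0) * 1000.0
    print(f"{name}: {len(kp)} keypoints in {dt:.1f} ms")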


Summary

Introduction

Animals have an instinctive ability to explore and navigate in an unknown space. Inspired by the spatial cognition of animals, researchers have spent the past decades investigating how animals perceive, store and maintain spatial knowledge[1−7]. Experimental results[7] provide evidence that the hippocampus receives sensory neocortical information from the entorhinal cortex (EC) through the connections between the EC and the hippocampus. These biological findings motivated researchers in robotics to build cognitive maps through a process that integrates the activity of both grid cells and place cells in the EC-hippocampal area[8−11]. A rodent updates its pose by fusing external visual information with the neural activity of place cells and grid cells; these behaviors are similar to the prediction and update steps in SLAM. Because ORB provides good performance at low computational cost, it is suitable for constructing a semi-dense map in real time[37]. Inspired by these implementations of ORB and of RatSLAM, ORB is adopted in this paper to replace the RatSLAM algorithm's vision processing method.
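As a rough sketch of how ORB descriptors could drive a local-view-cell style template match (familiar scenes re-activate a stored template, novel scenes create a new one), the following minimal illustration is offered; the thresholds and helper names are assumptions, not the authors' implementation.

import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
templates = []                      # stored ORB descriptor sets, one per local view cell
MATCH_THRESHOLD = 40                # assumed minimum number of good matches

def local_view_cell(frame_gray):
    """Return the index of the matching view template, creating a new one if needed."""
    _, des = orb.detectAndCompute(frame_gray, None)
    if des is None:                 # no keypoints found in this frame
        return -1
    best_idx, best_score = -1, 0
    for i, tmpl in enumerate(templates):
        matches = matcher.match(des, tmpl)
        score = sum(1 for m in matches if m.distance < 64)
        if score > best_score:
            best_idx, best_score = i, score
    if best_score >= MATCH_THRESHOLD:
        return best_idx             # familiar scene: re-activate existing local view cell
    templates.append(des)           # novel scene: create a new local view cell
    return len(templates) - 1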

System overview
Visual processing
Local view cells
CAN dynamics
Path integration
Cognitive map construction
Experimental results
Feature extraction
Cognitive map
Pose cell activity
Global localization
Findings
Conclusions
