Abstract
Many algorithms have been developed to perform visual simultaneous localization and mapping (SLAM) for robotic applications. These algorithms use monocular or stereo vision systems to address the constraints of navigation in unknown or dynamic environments. The processing-time and precision requirements of SLAM systems limit their use in many embedded applications such as UAVs or autonomous vehicles. Meanwhile, the trend towards low-cost and low-power processing calls for massive parallelism on hardware architectures. The emergence of recent heterogeneous embedded architectures should help in designing embedded systems dedicated to visual SLAM applications. A previous work demonstrated that bio-inspired algorithms are competitive with classical methods based on image processing and environment perception. This paper studies a bio-inspired SLAM algorithm with the aim of making it suitable for implementation on a heterogeneous architecture dedicated to embedded applications. An algorithm-architecture adequation (AAA) approach is used to partition the workload across a CPU-GPU architecture and thereby speed up processing tasks.
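To make the CPU-GPU partitioning idea concrete, the sketch below shows the general pattern such an adequation study produces: a pixel-parallel front-end step (here, RGB-to-grayscale conversion of a camera frame) is offloaded to a GPU kernel, while the sequential back-end (map update) stays on the CPU. This is a minimal illustrative sketch, not the paper's implementation; the kernel name, frame size, and the `updateMap` stub are assumptions.

```cuda
#include <cstdint>
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// GPU front-end: RGB-to-grayscale conversion, one thread per pixel.
// Pixel-parallel preprocessing is the kind of workload an
// algorithm-architecture adequation study typically maps to the GPU.
__global__ void rgbToGray(const uint8_t* rgb, float* gray, int w, int h) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;
    int i = y * w + x;
    gray[i] = 0.299f * rgb[3 * i] + 0.587f * rgb[3 * i + 1]
            + 0.114f * rgb[3 * i + 2];
}

// CPU back-end stub: the sequential map/pose update stays on the host
// (hypothetical placeholder, not the paper's actual back-end).
void updateMap(const std::vector<float>& gray, int w, int h) {
    printf("map updated with a %dx%d frame\n", w, h);
}

int main() {
    const int w = 640, h = 480;
    std::vector<uint8_t> frame(3 * w * h, 128);  // stand-in camera frame
    std::vector<float> gray(w * h);

    uint8_t* dRgb; float* dGray;
    cudaMalloc(&dRgb, frame.size());
    cudaMalloc(&dGray, gray.size() * sizeof(float));
    cudaMemcpy(dRgb, frame.data(), frame.size(), cudaMemcpyHostToDevice);

    dim3 block(16, 16);
    dim3 grid((w + 15) / 16, (h + 15) / 16);
    rgbToGray<<<grid, block>>>(dRgb, dGray, w, h);  // GPU partition

    cudaMemcpy(gray.data(), dGray, gray.size() * sizeof(float),
               cudaMemcpyDeviceToHost);
    updateMap(gray, w, h);  // CPU partition

    cudaFree(dRgb);
    cudaFree(dGray);
    return 0;
}
```

In a full pipeline the host-device copies would be overlapped with computation (e.g. via streams), since transfer cost is usually what decides whether a task is worth moving to the GPU partition.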
Highlights
Robot navigation is not always possible in certain circumstances, either because no map is available or because the environment keeps changing
Eyes, represented by cameras, are used to provide input data for front-end operations, but the images are processed differently than in classical methods
One of the advantages of the Rat-SLAM navigation system is its ability to run in dynamic environments using a cheap camera sensor, owing to the rat retina's low visual acuity (VA) compared to the human retina [3]
Summary
Robot navigation is not always possible in certain circumstances, either because no map is available or because the environment keeps changing. Bio-inspired approaches are based on learning concepts from nature and applying them to design an enhanced real-time SLAM system. Most of these algorithms aim to simulate the biological retina and brain-based methods for feature detection and description, which makes the model complex and its parallelization a real challenge. Eyes, represented by cameras, are used to provide input data for front-end operations, but the images are processed differently than in classical methods. Eyes can be categorized into simple eyes with a single concave photoreceptor lens, as in humans, and compound eyes with multiple lenses, as in some insects [2]. Calibration and other algorithm parameters were adapted to each dataset to always achieve maximum performance.
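To make the retina analogy concrete: Rat-SLAM-style pipelines typically compare heavily downsampled intensity "view templates" instead of extracting classical features, which is what makes cheap, low-resolution cameras sufficient. The sketch below scores the current low-resolution view against all stored templates with a sum of absolute differences, one template per GPU thread; the kernel name, template length, and match threshold are illustrative assumptions, not the paper's code.

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Each thread scores the current low-resolution view against one stored
// template using a sum of absolute differences (SAD).
__global__ void sadScores(const float* current, const float* templates,
                          float* scores, int len, int numTemplates) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= numTemplates) return;
    float sum = 0.0f;
    for (int i = 0; i < len; ++i)
        sum += fabsf(current[i] - templates[t * len + i]);
    scores[t] = sum;
}

int main() {
    const int len = 64;            // low-resolution view length (assumed)
    const int numTemplates = 1000; // stored views
    const float threshold = 3.0f;  // match threshold (assumed)

    std::vector<float> current(len, 0.5f);                 // stand-in view
    std::vector<float> templates(numTemplates * len, 0.4f); // stand-in store

    float *dCur, *dTpl, *dScores;
    cudaMalloc(&dCur, len * sizeof(float));
    cudaMalloc(&dTpl, templates.size() * sizeof(float));
    cudaMalloc(&dScores, numTemplates * sizeof(float));
    cudaMemcpy(dCur, current.data(), len * sizeof(float),
               cudaMemcpyHostToDevice);
    cudaMemcpy(dTpl, templates.data(), templates.size() * sizeof(float),
               cudaMemcpyHostToDevice);

    sadScores<<<(numTemplates + 255) / 256, 256>>>(dCur, dTpl, dScores,
                                                   len, numTemplates);

    std::vector<float> scores(numTemplates);
    cudaMemcpy(scores.data(), dScores, numTemplates * sizeof(float),
               cudaMemcpyDeviceToHost);

    // CPU decision: reuse the best-matching template or learn a new one.
    int best = 0;
    for (int t = 1; t < numTemplates; ++t)
        if (scores[t] < scores[best]) best = t;
    if (scores[best] < threshold)
        printf("matched stored view %d (possible loop closure)\n", best);
    else
        printf("no match: store current view as a new template\n");

    cudaFree(dCur); cudaFree(dTpl); cudaFree(dScores);
    return 0;
}
```

In practice the comparison is usually repeated over small horizontal shifts of the view to tolerate camera rotation, but the per-template parallelism shown here is the part that benefits most from the GPU.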