Abstract

Robot localization is a mandatory ability for a robot to navigate the world. Solving the SLAM (Simultaneous Localization and Mapping) problem allows a robot to localize itself in an environment while building a map of its surroundings. Vision-based SLAM uses one or more cameras as the main source of information. SLAM involves a large computational load on its own, and using vision adds further complexity that does not scale well. This makes real-time operation hard to achieve for applications where high rate and low latency are inherent constraints, such as Advanced Driver Assistance Systems (ADAS). To help robots solve SLAM in real time, we propose a vision core that processes the pixel stream coming from the camera in a vision front-end, so that the SLAM method works only with high-level features extracted from the image. This paper describes the FPGA implementation of a core that computes the BRIEF descriptor from a camera output. We also present the implementation of the correlation method for this descriptor, used for tracking across an image sequence. The core is then tested in an embedded SLAM application, showing a good speed-up.
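For readers unfamiliar with the descriptor named in the abstract, the following is a minimal software sketch of the BRIEF pipeline in C, not the paper's hardware design: a descriptor is built from a fixed set of pairwise intensity comparisons inside a smoothed patch, and matching (the "correlation" step) reduces to a Hamming distance. The descriptor length (256 bits), the sampling-pattern arrays, and all function names are illustrative assumptions, not parameters taken from the paper.

```c
#include <stdint.h>

#define DESC_WORDS 4            /* 4 x 64 bits = 256-bit descriptor (assumed length) */
#define NUM_PAIRS  256

/* Fixed, pre-generated sampling pattern: each test compares two pixels at
 * these offsets around the keypoint. Left empty (zero-initialized) here;
 * a real pattern would be drawn once, e.g. from a Gaussian, and frozen. */
static const int8_t pair_x1[NUM_PAIRS], pair_y1[NUM_PAIRS];
static const int8_t pair_x2[NUM_PAIRS], pair_y2[NUM_PAIRS];

/* Build the descriptor for the keypoint at (kx, ky) in a pre-smoothed
 * grayscale image: bit i is 1 iff intensity at point 1 < point 2. */
void brief_describe(const uint8_t *img, int stride, int kx, int ky,
                    uint64_t desc[DESC_WORDS])
{
    for (int w = 0; w < DESC_WORDS; w++)
        desc[w] = 0;
    for (int i = 0; i < NUM_PAIRS; i++) {
        uint8_t a = img[(ky + pair_y1[i]) * stride + (kx + pair_x1[i])];
        uint8_t b = img[(ky + pair_y2[i]) * stride + (kx + pair_x2[i])];
        if (a < b)
            desc[i / 64] |= 1ULL << (i % 64);
    }
}

/* Match two descriptors by Hamming distance: XOR then population count.
 * Tracking picks, for each feature, the candidate with minimal distance. */
int brief_distance(const uint64_t a[DESC_WORDS], const uint64_t b[DESC_WORDS])
{
    int d = 0;
    for (int w = 0; w < DESC_WORDS; w++)
        d += __builtin_popcountll(a[w] ^ b[w]);
    return d;
}
```

This comparison-and-popcount structure is what makes BRIEF attractive for a streaming FPGA front-end: it needs no floating point, and both the binary tests and the XOR/popcount matching map naturally onto simple parallel logic.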
