Abstract

Considerable design effort is duplicated when different hardware is built to implement different simultaneous localization and mapping (SLAM) algorithms. In this brief, a reconfigurable architecture with dedicated instruction sets allows the coprocessor to support the pose estimation of a representative class of SLAM algorithms, whether feature-based or learning-based, by decomposing them into basic common operations. Furthermore, a memory-reuse strategy in the instructions is designed to avoid the need for temporary memory during complex operations. Finally, two parallel computing cores perform matrix operations and the specialized computations of pose estimation in floating-point and fixed-point arithmetic. Together, these contribute to the low hardware resource usage and memory requirements illustrated in the experimental results.
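To make the decomposition idea concrete, the sketch below (an illustrative assumption, not the brief's actual instruction set or architecture) shows how a Gauss-Newton pose update, common to feature-based SLAM back ends, reduces to a short sequence of generic matrix "instructions" that a reconfigurable coprocessor could dispatch to a matrix core; reusing named registers for intermediates loosely mirrors the memory-reuse strategy mentioned above.

```python
import numpy as np

def exec_program(program, regs):
    """Interpret a list of (op, dst, srcs) tuples over a register file of matrices.

    The opcode names (TRANSPOSE, MATMUL, SOLVE) are hypothetical placeholders,
    not the instruction set described in the brief.
    """
    for op, dst, srcs in program:
        a = regs[srcs[0]]
        if op == "TRANSPOSE":
            regs[dst] = a.T
        elif op == "MATMUL":
            regs[dst] = a @ regs[srcs[1]]
        elif op == "SOLVE":  # dst = a^{-1} @ srcs[1]
            regs[dst] = np.linalg.solve(a, regs[srcs[1]])
        else:
            raise ValueError(f"unknown op {op}")
    return regs

# One Gauss-Newton step, dx = (J^T J)^{-1} J^T r, expressed as primitive ops.
rng = np.random.default_rng(0)
regs = {"J": rng.standard_normal((10, 6)), "r": rng.standard_normal((10, 1))}
program = [
    ("TRANSPOSE", "Jt", ("J",)),
    ("MATMUL",    "H",  ("Jt", "J")),
    ("MATMUL",    "b",  ("Jt", "r")),
    ("SOLVE",     "dx", ("H", "b")),
]
regs = exec_program(program, regs)
print(regs["dx"].ravel())  # 6-DoF pose increment
```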
