Abstract

This paper compares the performance of three state-of-the-art visual-inertial simultaneous localization and mapping (SLAM) methods in the context of assisted wayfinding for the visually impaired. Specifically, we analyze their strengths and weaknesses for the assisted wayfinding of a robotic navigation aid (RNA). Based on this analysis, we select the best-suited visual-inertial SLAM method for the RNA application and extend it by integrating a method capable of detecting loops caused by the RNA's unique motion pattern. By incorporating these loop closures into the graph optimization process, the extended visual-inertial SLAM method reduces the pose estimation error. Experimental results on our own datasets and the TUM VI benchmark datasets confirm the advantage of the selected method over the other two and validate the efficacy of the extended method.
