Abstract

In this article, we develop a resilient tightly coupled ultra-wideband (UWB) visual–inertial indoor localization system (R-UVIS) that achieves accurate and robust localization in complex scenes, even when individual sensors fail. Specifically, three schemes are designed for the proposed system. First, we introduce line and image-patch features to improve the precision and robustness of visual feature tracking; building on these multifeatures, we propose accurate loop-closure and relocalization methods to further improve localization performance. Second, we incorporate a UWB sensor to suppress localization drift in complex scenes and to provide a fixed reference frame. Third, we propose a resilient multisensor fusion method based on an optimization framework that fuses the UWB, visual, and inertial measurements in a tightly coupled manner. This fusion approach improves the robustness of the system, allowing it to switch seamlessly among different localization modes depending on the specific scene. In addition, a joint initialization process for the three sensors is designed for the whole system. We conduct extensive experiments on a public dataset and in real-world scenarios to evaluate the proposed R-UVIS. The experimental results show that R-UVIS provides accurate and robust localization in a fixed coordinate system in complicated indoor scenes, even when visual tracking fails or the UWB anchors are unavailable.
