Abstract
Visible light positioning (VLP) is a promising technology since it can provide high-accuracy indoor localization based on the existing lighting infrastructure. Most VLP systems require a prior light-emitting diode (LED) location map, termed a VLP-landmark map in this article, for which manual surveys are costly in practical deployment at scale. Moreover, existing approaches require dense LED deployments. In this work, we propose a multisensor fusion framework, termed VWR-simultaneous localization and mapping (SLAM), which tightly fuses VLP, a wheel odometer, and a red-green-blue-depth (RGB-D) camera to achieve SLAM. Our VWR-SLAM can provide accurate and robust robot localization and navigation in LED shortage/outage situations, while constructing the 3-D sparse environment map and the 3-D VLP-landmark map without tedious manual measurements. The experimental results show that our proposed scheme can provide an average robot positioning accuracy of 1.81 cm and an LED mapping accuracy of 3.01 cm.
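The abstract does not detail the paper's tightly coupled VWR-SLAM formulation, so the sketch below is only a loosely related illustration of the general fusion idea: an extended Kalman filter that propagates a planar robot pose from wheel-odometry measurements and corrects it with VLP position fixes. All names (`OdomVlpEkf`, `predict`, `update_vlp`) and noise values are hypothetical, and the RGB-D visual constraints of the full system are omitted for brevity.

```python
import numpy as np

# Hypothetical EKF sketch: fuse wheel odometry (prediction) with
# VLP position fixes (correction) for a planar robot pose [x, y, theta].
# This is NOT the paper's method, only a generic illustration of
# odometry/VLP sensor fusion.

class OdomVlpEkf:
    def __init__(self):
        self.x = np.zeros(3)                   # pose: x [m], y [m], theta [rad]
        self.P = np.eye(3) * 1e-3              # pose covariance
        self.Q = np.diag([1e-4, 1e-4, 1e-5])   # odometry process noise (assumed)
        self.R = np.eye(2) * (0.03 ** 2)       # VLP fix noise, ~3 cm std (assumed)

    def predict(self, v, w, dt):
        """Propagate the pose with a unicycle model driven by wheel odometry."""
        x, y, th = self.x
        self.x = np.array([x + v * dt * np.cos(th),
                           y + v * dt * np.sin(th),
                           th + w * dt])
        # Jacobian of the motion model with respect to the state
        F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                      [0.0, 1.0,  v * dt * np.cos(th)],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update_vlp(self, z):
        """Correct the pose with a 2-D VLP position fix z = [x, y]."""
        H = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])        # VLP observes position directly
        innov = z - H @ self.x                 # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ innov
        self.P = (np.eye(3) - K @ H) @ self.P

ekf = OdomVlpEkf()
ekf.predict(v=0.2, w=0.05, dt=0.1)            # wheel-odometry step
ekf.update_vlp(np.array([0.021, 0.001]))      # LED-derived position fix
print(ekf.x)
```

A filter of this form degrades gracefully in LED shortage/outage situations: when no VLP fix arrives, the prediction step alone keeps the pose estimate alive on odometry, which matches the robustness claim in the abstract.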