Abstract

Visible light positioning (VLP) is a promising technology because it can provide high-accuracy indoor localization based on the existing lighting infrastructure. Most VLP systems require a prior light-emitting diode (LED) location map, termed a VLP-landmark map in this article, for which manual surveys are costly in practical large-scale deployment. Moreover, existing approaches require dense LED deployments. In this work, we propose a multisensor fusion framework, termed VWR-SLAM, which tightly fuses VLP, a wheel odometer, and a red-green-blue-depth (RGB-D) camera to achieve simultaneous localization and mapping (SLAM). VWR-SLAM provides accurate and robust robot localization and navigation in LED shortage/outage situations while constructing both a 3-D sparse environment map and a 3-D VLP-landmark map without tedious manual measurements. Experimental results show that the proposed scheme achieves an average robot positioning accuracy of 1.81 cm and an LED mapping accuracy of 3.01 cm.
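The article itself contains no code; purely as an illustration of the kind of joint estimation a tightly coupled VLP/odometry SLAM performs, the toy Python sketch below (our own construction with hypothetical values and noise weights, not the authors' VWR-SLAM pipeline) jointly solves for 2-D robot positions and LED landmark positions from wheel-odometry increments and VLP relative-position measurements via weighted linear least squares:

"""Toy linear-SLAM sketch: jointly estimate 2-D robot positions and LED
(VLP-landmark) positions from wheel-odometry increments and VLP
relative-position measurements. Illustrative only; the actual VWR-SLAM
system is a nonlinear, tightly coupled optimization that also fuses
RGB-D visual features."""
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground truth: five robot positions and two ceiling LEDs.
poses_gt = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.5], [3.0, 0.5], [4.0, 1.0]])
leds_gt = np.array([[1.5, 3.0], [3.5, 3.0]])
T, K = len(poses_gt), len(leds_gt)

# Wheel odometry: noisy relative displacement between consecutive poses.
odo = np.diff(poses_gt, axis=0) + rng.normal(0, 0.05, (T - 1, 2))

# VLP: noisy displacement from robot to each detected LED,
# stored as (pose index, LED index, measurement).
vlp = [(t, k, leds_gt[k] - poses_gt[t] + rng.normal(0, 0.02, 2))
       for t in range(T) for k in range(K)]

# Stack all constraints into one weighted linear system A x = b, where
# x = [p_0 .. p_{T-1}, l_0 .. l_{K-1}] flattened as 2-D points.
n = 2 * (T + K)
rows_A, rows_b = [], []

def block(idx):
    """Selector rows picking the 2-D point at state index idx."""
    m = np.zeros((2, n))
    m[0, 2 * idx] = 1.0
    m[1, 2 * idx + 1] = 1.0
    return m

w_prior, w_odo, w_vlp = 100.0, 1.0 / 0.05, 1.0 / 0.02  # hypothetical weights

# Prior anchoring the first pose at the origin (fixes the gauge freedom).
rows_A.append(w_prior * block(0))
rows_b.append(w_prior * np.zeros(2))

# Odometry constraints: p_{t+1} - p_t = odo_t.
for t in range(T - 1):
    rows_A.append(w_odo * (block(t + 1) - block(t)))
    rows_b.append(w_odo * odo[t])

# VLP constraints: l_k - p_t = z, tying poses and LED landmarks together.
for t, k, z in vlp:
    rows_A.append(w_vlp * (block(T + k) - block(t)))
    rows_b.append(w_vlp * z)

A = np.vstack(rows_A)
b = np.concatenate(rows_b)
x, *_ = np.linalg.lstsq(A, b, rcond=None)

poses_est = x[:2 * T].reshape(T, 2)
leds_est = x[2 * T:].reshape(K, 2)
print("pose RMSE [m]:", np.sqrt(np.mean((poses_est - poses_gt) ** 2)))
print("LED  RMSE [m]:", np.sqrt(np.mean((leds_est - leds_gt) ** 2)))

In this linearized toy, a single least-squares solve recovers both the trajectory and the VLP-landmark map at once, which is the essence of estimating LED positions without manual surveys; the real system replaces this with nonlinear optimization over the full sensor suite.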
