Abstract

To enable accurate indoor localization at low cost, recent research in visible light positioning (VLP) has proposed to employ existing ceiling lights as location landmarks, identifying individual lights through statistical visual/optical features captured by smartphone cameras or light sensors. Despite their potential, we find such solutions unreliable: the features are easily corrupted by a slight rotation of the smartphone, and are not discriminative enough for many practical light models with different sizes, shapes, and intensities. In this work, we propose Auto-Litell to resolve these critical challenges and make VLP truly robust. Auto-Litell builds a customized deep neural network to automatically distill "invisible" visual features from the lights, features that are resilient to smartphone orientation and light model. Moreover, Auto-Litell introduces a Light-CycleGAN that generates "fake" light images to augment the training data, reducing the human labor of data collection and labeling. We have implemented Auto-Litell as a real-time localization and navigation system on Android. Our experiments demonstrate Auto-Litell's high accuracy in discriminating lights within the same building, and its high reliability across a variety of practical usage scenarios.
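The abstract gives no implementation details of Light-CycleGAN, so the following is only a minimal sketch of the general CycleGAN-style augmentation idea it names: two generators translate between a source domain (e.g., cheaply obtained light images) and a target domain (real light photos), trained with an adversarial loss plus a cycle-consistency loss. Everything here is an illustrative assumption rather than the paper's design: the PyTorch framework, the module names (LightGenerator, LightDiscriminator), the 64x64 grayscale image format, the network depths, and the loss weight lam=10.0.

```python
# Hypothetical sketch of one direction of a CycleGAN-style objective for
# generating synthetic light images. Architectures and shapes are toy
# assumptions, not details from the Auto-Litell paper.
import torch
import torch.nn as nn

class LightGenerator(nn.Module):
    """Translates a light image from one domain to the other (toy depth)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class LightDiscriminator(nn.Module):
    """Scores whether an image looks like it came from the target domain."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(64 * 16 * 16, 1),  # 64x64 -> 16x16 after 2 strides
        )
    def forward(self, x):
        return self.net(x)

def generator_loss(G, F, D, src, lam=10.0):
    """Adversarial + cycle-consistency loss for the src -> target generator G,
    with F mapping back and D judging the target domain."""
    bce = nn.BCEWithLogitsLoss()
    fake_tgt = G(src)  # synthetic target-domain light image
    adv = bce(D(fake_tgt), torch.ones(fake_tgt.size(0), 1))  # fool the discriminator
    cyc = lam * nn.functional.l1_loss(F(fake_tgt), src)      # F(G(x)) should recover x
    return adv + cyc, fake_tgt

# Toy usage: random batches in [-1, 1] stand in for light images.
G, F, D = LightGenerator(), LightGenerator(), LightDiscriminator()
src = torch.rand(8, 1, 64, 64) * 2 - 1
loss, synthetic = generator_loss(G, F, D, src)
loss.backward()
print(synthetic.shape)  # synthetic images that could augment the training set
```

In a full CycleGAN, a symmetric loss trains the reverse generator F and both discriminators; the cycle term is what lets the model learn from unpaired images, which is presumably what reduces the labeling burden the abstract describes.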
