Abstract

To enable accurate indoor localization at low cost, recent research in visible light positioning (VLP) has proposed employing existing ceiling lights as location landmarks, using smartphone cameras or light sensors to identify individual lights by their statistical visual/optical features. Despite the potential, we find that such solutions are unreliable: the features are easily corrupted by a slight rotation of the smartphone, and are not discriminative enough for many practical light models of differing size, shape, and intensity. In this work, we propose Auto-Litell to resolve these critical challenges and make VLP truly robust. Auto-Litell builds a customized deep neural network model that automatically distills "invisible" visual features from the lights, which remain stable across smartphone orientations and light models. Moreover, Auto-Litell introduces a Light-CycleGAN that generates "fake" light images to augment the training data, relieving the human labor of data collection and labeling. We have implemented Auto-Litell as a real-time localization and navigation system on Android. Our experiments demonstrate Auto-Litell's high accuracy in discriminating lights within the same building, and its high reliability across a variety of practical usage scenarios.
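As a rough illustration of the data-augmentation idea, the sketch below shows a minimal CycleGAN-style generator update in PyTorch that synthesizes "fake" light images from two unpaired image domains. The network architecture, layer sizes, and loss weights here are generic CycleGAN defaults and illustrative assumptions only; the abstract does not specify the actual Light-CycleGAN design.

```python
# Minimal CycleGAN-style sketch for synthesizing "fake" light images.
# All architecture/loss details are illustrative assumptions, not the
# paper's actual Light-CycleGAN design.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps images from one domain to the other (e.g., domain-A lights -> domain-B lights)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, padding=3),
            nn.InstanceNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.InstanceNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=7, padding=3),
            nn.Tanh(),  # outputs in [-1, 1], matching normalized image inputs
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style critic: scores local patches of an image as real or fake."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
            nn.InstanceNorm2d(128),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, kernel_size=4, padding=1),  # patch-level real/fake map
        )

    def forward(self, x):
        return self.net(x)

# Two generators (A->B and B->A) and two discriminators, as in standard CycleGAN.
G_ab, G_ba = Generator(), Generator()
D_a, D_b = Discriminator(), Discriminator()
gan_loss, cycle_loss = nn.MSELoss(), nn.L1Loss()
opt_g = torch.optim.Adam(
    list(G_ab.parameters()) + list(G_ba.parameters()), lr=2e-4, betas=(0.5, 0.999)
)

# One illustrative generator update on dummy batches; real training would use
# unpaired photos of lights from the two domains.
real_a = torch.randn(4, 3, 128, 128)
real_b = torch.randn(4, 3, 128, 128)

fake_b = G_ab(real_a)  # synthesize a "fake" light image in domain B
fake_a = G_ba(real_b)
pred_b, pred_a = D_b(fake_b), D_a(fake_a)
# Adversarial term: generators try to make the critics output "real" (1).
loss_gan = (gan_loss(pred_b, torch.ones_like(pred_b))
            + gan_loss(pred_a, torch.ones_like(pred_a)))
# Cycle-consistency term: translating A -> B -> A should recover the original.
loss_cyc = cycle_loss(G_ba(fake_b), real_a) + cycle_loss(G_ab(fake_a), real_b)
loss = loss_gan + 10.0 * loss_cyc  # 10x cycle weight is a common CycleGAN default
opt_g.zero_grad()
loss.backward()
opt_g.step()
```

The appeal of a CycleGAN-style approach for this task is that the cycle-consistency loss works with unpaired data, so synthetic light images can be generated without manually matched image pairs, consistent with the abstract's goal of reducing data collection and labeling effort.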
