Abstract

Accurate indoor positioning remains a difficult problem, often described as the last-meter dilemma of localization and navigation, owing to the lack of satellite navigation signals and the complex multipath and dynamic characteristics of indoor channels. Indoor visible light positioning (VLP) offers a new paradigm for accurate, low-complexity indoor positioning using widely deployed light-emitting diodes (LEDs). In this paper, the received signal strength at the photodetector of a mobile terminal is used to extract geometric features and infer accurate position coordinates via deep learning. Specifically, a hybrid model, i.e., a convolutional-recurrent neural network (CRNN), is devised to learn the nonlinear mapping from the received signal strength to the position coordinates in the complex indoor visible light propagation environment. A four-dimensional (4D) VLP architecture based on the CRNN is formulated to handle the non-line-of-sight propagation of indoor visible light and different receiver orientations. Simulation results show that the proposed CRNN-based 4D VLP (CR4D-VLP) method achieves centimeter-level positioning accuracy and significantly outperforms other state-of-the-art deep-learning-based schemes in both line-of-sight and non-line-of-sight scenarios with various spatial patterns of LEDs and different room sizes.
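To make the idea concrete, the sketch below shows what a CRNN-style regressor from received signal strength (RSS) to a 4D output (3D position plus receiver orientation) might look like. It is a minimal illustration, not the authors' architecture: the class name CRNNPositioner, the layer sizes, the choice of a GRU for the recurrent part, and the assumption of 16 ceiling LEDs and a sequence of RSS snapshots are all assumptions made for this example.

```python
import torch
import torch.nn as nn


class CRNNPositioner(nn.Module):
    """Hypothetical CRNN sketch: 1-D convolutions extract geometric features
    from the per-LED RSS vector at each time step, a GRU aggregates the
    feature sequence as the terminal moves, and a linear head regresses
    (x, y, z, orientation)."""

    def __init__(self, num_leds: int = 16, conv_channels: int = 32,
                 hidden_size: int = 64, output_dim: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, conv_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(conv_channels, conv_channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.rnn = nn.GRU(conv_channels * num_leds, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, output_dim)

    def forward(self, rss: torch.Tensor) -> torch.Tensor:
        # rss: (batch, seq_len, num_leds) received-signal-strength samples
        b, t, n = rss.shape
        x = rss.reshape(b * t, 1, n)         # treat each RSS snapshot as a 1-D signal
        x = self.conv(x).reshape(b, t, -1)   # (batch, seq_len, conv_channels * num_leds)
        out, _ = self.rnn(x)                 # temporal context across snapshots
        return self.head(out[:, -1, :])      # estimate for the latest time step


# Toy usage: 8 trajectories, 10 RSS snapshots each, 16 LEDs (illustrative values).
model = CRNNPositioner()
rss = torch.rand(8, 10, 16)
pred = model(rss)  # shape (8, 4): x, y, z, orientation angle
print(pred.shape)
```

In such a design the convolutional stage captures the spatial pattern of RSS across LEDs, while the recurrent stage exploits temporal correlation, which is one plausible way to remain robust to non-line-of-sight components and changing receiver orientation.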
