Abstract
Wireless localization methods can exploit existing Wi-Fi infrastructure to obtain indoor position information. However, an essential bottleneck of wireless localization is the labour-intensive geo-tagging, calibration, and updating of the collected wireless data. To address this, this study develops a vision-aided self-calibration method for wireless localization that requires no manual supervision. The method localizes a pedestrian using a visual coordinate transformation model (V-CTM). The visual location is matched with simultaneously collected wireless data to automatically generate training data for a wireless propagation model. A wireless self-calibration scheme is proposed that self-trains the model using a Bi-LSTM and a principle of separate training. Experiments demonstrated that the training dataset generated with the V-CTM had an accuracy of 0.2 m, and the localization experiment achieved an accuracy of 2 m using the constructed propagation model. Meanwhile, the time and labour consumed by model calibration and updating can be significantly reduced.
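The core data-generation step described above, pairing each Wi-Fi scan with the visual location recorded closest in time, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the data layout (timestamped visual fixes and per-scan RSSI dictionaries), and the matching tolerance are all assumptions.

```python
from bisect import bisect_left

def pair_scans_with_positions(visual_fixes, wifi_scans, max_dt=0.5):
    """Pair each Wi-Fi scan with the nearest-in-time visual position.

    visual_fixes: list of (timestamp, (x, y)) from the vision pipeline,
                  sorted by timestamp (hypothetical V-CTM output format).
    wifi_scans:   list of (timestamp, {ap_id: rssi_dbm}) scan records.
    max_dt:       maximum time offset in seconds allowed for a match
                  (assumed tolerance, not from the paper).

    Returns a list of (position, rssi_dict) training samples for a
    wireless propagation model.
    """
    times = [t for t, _ in visual_fixes]
    samples = []
    for t_scan, rssi in wifi_scans:
        i = bisect_left(times, t_scan)
        # Candidates: the visual fix just before and just after the scan.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(times[k] - t_scan))
        if abs(times[j] - t_scan) <= max_dt:
            samples.append((visual_fixes[j][1], rssi))
    return samples

# Example: scans far from any visual fix are dropped rather than mislabeled.
fixes = [(0.0, (0.0, 0.0)), (1.0, (1.0, 0.0)), (2.0, (2.0, 0.0))]
scans = [(0.1, {"ap1": -50}), (1.6, {"ap1": -60}), (5.0, {"ap1": -70})]
pairs = pair_scans_with_positions(fixes, scans)
```

The resulting (position, RSSI) pairs would serve as the automatically generated, geo-tagged training set that otherwise requires manual site surveying.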