Abstract

Indoor localization is an emerging requirement in many large shopping malls. Existing indoor localization systems, however, require exhaustive system bootstrapping and calibration phases. This large sunk cost often hinders the practical deployment of indoor localization systems in large shopping malls. In contrast, we observe that floor-plan images of large shopping malls, which highlight the positions of many shops, are widely available in Google Maps, Gaode Maps, Baidu Maps, etc. From several observed shops, people can localize themselves on such a floor plan (called self-localization). However, because this requires geometric intuition and mental space transformation, not everyone is comfortable with this approach. In this article, we propose EyeLoc, which uses smartphone vision to enable accurate self-localization on a floor-plan image. EyeLoc addresses several challenges, including developing a ubiquitous smartphone vision system, extracting efficient vision clues, and achieving robust measurement-error mitigation. We implement EyeLoc on Android and evaluate its performance in an emulated environment, two large shopping malls, and a large semi-outdoor outlet mall. The results show that the 90th-percentile localization and heading-direction errors are 5.97 m and 20°, respectively, in 70,000 m² malls.
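
To make the core geometric idea concrete, below is a minimal sketch (not EyeLoc's actual pipeline, whose details are not given in this abstract) of self-localization from several observed shops: each shop has a known floor-plan position, and a measured bearing to it constrains the user to a line, so the position is recovered as a least-squares intersection of those lines. The function name `self_localize` and the assumption of absolute bearings (measured counter-clockwise from the floor plan's +x axis) are illustrative choices, not from the paper.

```python
# Illustrative sketch: localize a user on a floor plan from bearings to
# shops with known coordinates. Assumes absolute bearing angles; the real
# EyeLoc system derives its clues from smartphone vision instead.
import numpy as np

def self_localize(shops, bearings_deg):
    """Least-squares intersection of bearing rays.

    shops        -- (n, 2) array of shop positions on the floor plan (metres)
    bearings_deg -- length-n bearings from the user to each shop, in degrees,
                    measured counter-clockwise from the +x axis
    Returns the estimated (x, y) user position.
    """
    shops = np.asarray(shops, dtype=float)
    theta = np.radians(np.asarray(bearings_deg, dtype=float))
    # Bearing i constrains the user to the line through shop i with
    # direction (cos t, sin t):  (sx - x) * sin t - (sy - y) * cos t = 0,
    # which rearranges to the linear system A @ [x, y] = b below.
    A = np.column_stack([np.sin(theta), -np.cos(theta)])
    b = shops[:, 0] * np.sin(theta) - shops[:, 1] * np.cos(theta)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: three shops observed from (10, 10) to the east, north, and west.
shops = [(20.0, 10.0), (10.0, 30.0), (0.0, 10.0)]
bearings = [0.0, 90.0, 180.0]
print(self_localize(shops, bearings))  # -> approximately [10. 10.]
```

With noisy bearings the system is overdetermined, so adding more observed shops tightens the least-squares estimate; this is one plausible reason a robust measurement-error mitigation step, as the abstract mentions, matters in practice.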
