Abstract

A lack of floor plans is a fundamental obstacle to ubiquitous indoor location-based services. Recent work has made significant progress on accuracy, but it largely relies on slow crowdsensing that may take weeks or even months to collect enough data. In this paper, we propose Knitter, which can generate accurate floor maps from a single random user's one-hour data collection effort, and demonstrate how such maps can be used for indoor navigation. Knitter extracts high-quality floor layout information from single images, calibrates user trajectories, and filters outliers. It uses a multi-hypothesis map fusion framework that updates landmark positions/orientations and accessible areas incrementally according to evidence from each measurement. Our experiments on three different large buildings (up to $140\times 50\;\mathrm{m}^2$) with 30+ users show that Knitter produces correct map topology, with landmark location errors of $3\sim 5\;\mathrm{m}$ and orientation errors of $4\sim 6^\circ$, both at the 90th percentile. Our results are comparable to the state of the art at more than a $20\times$ speedup: data collection in each of the three buildings can finish in about one hour, even by a novice user trained for just a few minutes.
