Abstract

In this paper, we describe FootSLAM, a Bayesian estimation approach that achieves simultaneous localization and mapping (SLAM) for pedestrians using odometry obtained with foot-mounted inertial sensors. Existing approaches to infrastructure-less pedestrian position determination either suffer unbounded growth of positioning error or require a priori map information or exteroceptive sensors, such as cameras or light detection and ranging (LIDAR). FootSLAM, in contrast, achieves long-term error stability based solely on inertial sensor measurements. An analysis of the problem based on a dynamic Bayesian network (DBN) model reveals that this surprising result becomes possible by effectively hitchhiking on human perception and cognition. We discuss two extensions to FootSLAM: PlaceSLAM, which incorporates additional measurements or user-provided hints, and FeetSLAM, which enables automated collaborative mapping. Experimental data validating FootSLAM and its extensions are presented. The sensors and processing power of future devices such as smartphones are likely to suffice to position the bearer with the accuracy that FootSLAM already achieves today with foot-mounted sensors.
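The core idea summarized above can be illustrated with a minimal sketch: a particle filter in which each particle carries both a pose hypothesis and its own map of learned motion transitions, so that particles whose maps explain the pedestrian's motion well gain weight. This is only a hedged toy illustration, not the paper's algorithm: it uses square grid cells instead of FootSLAM's hexagonal transition map, a simplified 2-D odometry model, and Dirichlet-style smoothing with hypothetical parameters `alpha` and `noise`.

```python
import copy
import random
from collections import defaultdict

def new_particle():
    # A particle = one pose hypothesis plus its own map of transition counts.
    return {"pose": (0.0, 0.0),
            "map": defaultdict(lambda: defaultdict(int))}

def footslam_step(particles, odometry, alpha=0.8, noise=0.05):
    """One FootSLAM-like update (toy version): propagate each particle's pose
    with noisy odometry, weight it by how probable the implied cell transition
    is under the particle's own learned map, update the map, then resample."""
    weights = []
    for p in particles:
        # Propagate the pose with the odometry step plus Gaussian noise
        # (hypothetical noise model standing in for inertial drift).
        dx = odometry[0] + random.gauss(0.0, noise)
        dy = odometry[1] + random.gauss(0.0, noise)
        x, y = p["pose"]
        old_cell = (round(x), round(y))
        p["pose"] = (x + dx, y + dy)
        new_cell = (round(p["pose"][0]), round(p["pose"][1]))
        counts = p["map"][old_cell]
        total = sum(counts.values())
        # Dirichlet-smoothed transition probability; 9 outcomes assumed
        # (stay in the cell, or move to one of its 8 neighbours).
        w = (counts[new_cell] + alpha) / (total + 9.0 * alpha)
        counts[new_cell] += 1  # the particle learns its own map as it moves
        weights.append(w)
    # Resample: particles whose maps explain the motion well survive,
    # which is how mapping and localization reinforce each other.
    chosen = random.choices(particles, weights=weights, k=len(particles))
    return [copy.deepcopy(p) for p in chosen]
```

When the pedestrian revisits an area, particles whose pose and map are mutually consistent assign higher probability to the repeated transitions and dominate the resampling, which is the mechanism behind the long-term error stability claimed in the abstract.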
