Abstract

This paper presents a monocular camera (MC) and inertial measurement unit (IMU) integrated approach for indoor position estimation. Unlike traditional estimation methods, which use a forward-oriented camera to sample unknown and disordered scenes at a pre-determined frame rate with an auto-focused metric scale, we fix the monocular camera downward toward the floor and collect successive frames in which textures are orderly distributed and feature points are robustly detected. Meanwhile, the camera adopts a constant metric scale and an adaptive frame rate determined by the IMU data. Furthermore, distinct image feature point matching approaches are employed for visual localization depending on the motion mode: optical flow for the fast motion mode, and the Canny edge detector, Harris corner detector, and SIFT descriptor for the slow motion mode. For superfast motion and abrupt rotation, where the camera images are blurred and unusable, an Extended Kalman Filter is exploited to estimate the IMU outputs and derive the corresponding trajectory. Experimental results validate that the proposed method is effective and accurate for indoor positioning. Since the system is computationally efficient and compact, it is well suited for indoor navigation for visually impaired people and indoor localization for wheelchair users.
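
The sketch below is not the authors' code; it only illustrates, under assumptions, the slow-motion matching step named in the abstract: Harris corners detected on a downward-facing floor image, SIFT descriptors computed at those corners, and a ratio-test match between two successive frames followed by a 2-D rigid fit to recover in-plane displacement. All function names, thresholds, and the use of OpenCV are assumptions; the Canny edge step and the optical-flow fast-motion mode are omitted.

```python
import cv2
import numpy as np

def harris_sift_match(frame_prev, frame_curr, harris_thresh=0.01):
    """Match feature points between two consecutive floor images (illustrative only)."""
    gray_prev = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
    gray_curr = cv2.cvtColor(frame_curr, cv2.COLOR_BGR2GRAY)

    sift = cv2.SIFT_create()
    bf = cv2.BFMatcher(cv2.NORM_L2)

    def harris_keypoints(gray):
        # Harris corner response; keep locations above a fraction of the maximum response.
        resp = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
        ys, xs = np.where(resp > harris_thresh * resp.max())
        return [cv2.KeyPoint(float(x), float(y), 7) for x, y in zip(xs, ys)]

    # SIFT descriptors computed at the Harris corner locations.
    kp_prev, des_prev = sift.compute(gray_prev, harris_keypoints(gray_prev))
    kp_curr, des_curr = sift.compute(gray_curr, harris_keypoints(gray_curr))

    # Lowe's ratio test to discard ambiguous descriptor matches.
    good = []
    for pair in bf.knnMatch(des_prev, des_curr, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    pts_prev = np.float32([kp_prev[m.queryIdx].pt for m in good])
    pts_curr = np.float32([kp_curr[m.trainIdx].pt for m in good])

    # With a constant metric scale (downward camera at a fixed height), a 2-D
    # similarity fit between matched points gives the in-plane motion between frames.
    M, inliers = cv2.estimateAffinePartial2D(pts_prev, pts_curr)
    return M, pts_prev, pts_curr
```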
