Abstract

We use motion context recognition to improve the accuracy of our infrastructure-free indoor navigation algorithm. Target applications are demanding navigation scenarios such as first responder, rescue, and tactical operations. Our navigation algorithm fuses inertial and visual navigation. A Random Forest classifier is trained on Inertial Measurement Unit and visual navigation data to distinguish between walking, running, and climbing. This information is used both in pedestrian navigation, to perform stationarity detection with an adaptive threshold, and in particle filter fusion, to exclude visual data during climbing. The methods are evaluated in an indoor navigation test in which a person wearing tactical equipment moves through a building. Results show improved positioning accuracy, measured as loop closure error on the test track, especially when the movement is fast paced. The loop closure error was reduced on average by 4 % in two tests with slow movement and by 14 % with fast movement.
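The motion classification step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature set (windowed acceleration magnitude mean and variance), the class labels, the per-class stationarity thresholds, and all numeric values are assumptions for demonstration, using scikit-learn's RandomForestClassifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_windows(n, mean_mag, var_mag, label):
    # Hypothetical per-window IMU features: mean and variance of
    # acceleration magnitude over a short time window.
    X = np.column_stack([
        rng.normal(mean_mag, 0.1, n),
        rng.normal(var_mag, 0.1, n),
    ])
    return X, np.full(n, label)

# Synthetic training windows for the three motion classes
# (feature ranges are invented, not from the paper).
Xw, yw = make_windows(100, 1.0, 0.5, 0)  # walking
Xr, yr = make_windows(100, 2.5, 2.0, 1)  # running
Xc, yc = make_windows(100, 1.2, 1.5, 2)  # climbing

X = np.vstack([Xw, Xr, Xc])
y = np.concatenate([yw, yr, yc])

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)

# The predicted motion class selects an adaptive stationarity-detection
# threshold (values here are placeholders).
zupt_threshold = {0: 0.5, 1: 1.5, 2: 1.0}
pred = int(clf.predict([[2.4, 2.1]])[0])
threshold = zupt_threshold[pred]
```

In a real system the features would be computed from streaming IMU windows, and a predicted climbing class would additionally flag the particle filter to drop visual measurements for those epochs.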
