Abstract

In this paper, we study machine learning methods for recognizing the motion context of a user of an infrastructure-free navigation system. Motion context is information about whether the user is, for instance, running, crawling, or lying down. This can be valuable information for the command and control of a tactical or rescue operation, and it can also be used to adapt the positioning algorithm accordingly in order to improve the positioning result. We test our approach in collaboration with the Finnish Defence Forces. With only about 5 min of training data, we are able to detect the user's motion context over 93% of the time using a random forest classifier. However, our tests show that the performance of the classifier is highly dependent on the user of the system. For this reason, we experiment with different classification algorithms in order to find a user-independent classifier providing a good compromise between accuracy and computational complexity. With a naive Bayesian classifier, we achieve an 85% detection rate when the training data is not produced by the user. In addition, we demonstrate how motion recognition can be used to adjust the zero velocity update threshold in order to improve the performance of a foot-mounted inertial navigation algorithm.
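To make the pipeline described above concrete, the following is a minimal sketch, not the authors' actual implementation: it assumes windowed IMU features, illustrative class names, and made-up zero velocity update (ZUPT) thresholds, and uses scikit-learn's random forest and naive Bayes classifiers as stand-ins for the paper's classifiers.

```python
# Minimal sketch (assumed implementation, not the paper's code): classify motion
# context from windowed IMU features, then pick a ZUPT threshold per class.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

# Hypothetical motion classes and per-class ZUPT thresholds; the actual classes
# and threshold values used in the paper may differ.
MOTION_CLASSES = ["walking", "running", "crawling", "lying_down"]
ZUPT_THRESHOLDS = {"walking": 0.5, "running": 1.5, "crawling": 0.3, "lying_down": 0.1}

def extract_features(acc_window, gyro_window):
    """Simple statistical features from one window of accelerometer/gyro samples."""
    acc_mag = np.linalg.norm(acc_window, axis=1)
    gyro_mag = np.linalg.norm(gyro_window, axis=1)
    return np.array([
        acc_mag.mean(), acc_mag.std(), acc_mag.max() - acc_mag.min(),
        gyro_mag.mean(), gyro_mag.std(),
    ])

# Placeholder training set: in practice these would be labelled feature windows
# computed from roughly 5 min of recorded IMU data.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 5))
y_train = rng.choice(MOTION_CLASSES, size=300)

# User-dependent case: random forest classifier.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# User-independent case: a lighter naive Bayesian classifier.
nb = GaussianNB().fit(X_train, y_train)

# At run time: classify the current window and adapt the ZUPT detector accordingly.
acc_window = rng.normal(size=(100, 3))
gyro_window = rng.normal(size=(100, 3))
features = extract_features(acc_window, gyro_window).reshape(1, -1)
motion = rf.predict(features)[0]
zupt_threshold = ZUPT_THRESHOLDS[motion]
print(f"Detected motion context: {motion}, ZUPT threshold: {zupt_threshold}")
```

The key design point sketched here is the last step: the recognized motion class selects the zero velocity update threshold fed to the foot-mounted inertial navigation algorithm, so the detector can be tuned differently for, say, running versus crawling.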
