Abstract

Inspired by algorithms used in inertial navigation, an inertial motion capture algorithm capable of position and heading estimation is introduced. The fusion algorithm estimates link geometry in real time, which allows biomechanical constraints to be imposed without a priori knowledge of sensor placement. Furthermore, the algorithm estimates gyroscope and accelerometer bias, scaling, and non-orthogonality parameters in real time. The stationary phases of the links, during which pseudo-measurements such as zero-velocity or heading-stabilization updates are applied, are detected using optically trained neural networks with buffered accelerometer and gyroscope data as inputs. The position and heading errors in each link remain bounded relative to the traveled distance, even though the algorithm uses no magnetic or external position measurements. The performance of the algorithm, deployed using both extended and square-root unscented Kalman filtering schemes (EKF and SRUKF, respectively), is experimentally evaluated in a fast-paced walking test using a custom-made inertial motion capture system. Comparison with an optical motion capture system shows that the SRUKF outperforms the EKF by up to a factor of 3 in position RMSE and by up to 40% in attitude RMSE.
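To illustrate the kind of pseudo-measurement the abstract refers to, the sketch below shows a generic zero-velocity update (ZUPT) as a linear Kalman correction step, applied when a link is detected as stationary. This is a minimal, assumed formulation for illustration only: the state layout (velocity in entries 3..5), the measurement noise `R`, and the function name `zupt_update` are not taken from the paper.

```python
import numpy as np

def zupt_update(x, P, R=np.eye(3) * 1e-4):
    """Apply a zero-velocity pseudo-measurement to a Kalman filter state.

    Assumed illustrative state layout: x = [position(3), velocity(3), ...],
    so the velocity block occupies entries 3..5.

    x : (n,) state mean
    P : (n, n) state covariance
    R : (3, 3) pseudo-measurement noise covariance (assumed value)
    """
    n = x.size
    H = np.zeros((3, n))
    H[:, 3:6] = np.eye(3)           # select the velocity block
    z = np.zeros(3)                 # pseudo-measurement: velocity is zero
    y = z - H @ x                   # innovation
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(n) - K @ H) @ P
    return x_new, P_new

# Example: a stationary detection pulls a drifting velocity estimate toward zero.
x = np.zeros(9)
x[3:6] = [0.5, -0.2, 0.1]           # drifted velocity estimate (m/s)
P = np.eye(9) * 0.01
x_new, P_new = zupt_update(x, P)
```

Because the pseudo-measurement noise is much smaller than the state uncertainty in this example, the update shrinks the velocity estimate nearly to zero, which is what bounds position drift between stationary phases.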
