Localization-capable inertial motion capture algorithms rely on zero-velocity updates (ZUPT), usually as measurements in a Kalman filtering scheme, for position and attitude error control. As ZUPTs are only applicable during the static phases a link goes through, estimation errors grow during its dynamic phases. This error growth may be somewhat mitigated by imposing biomechanical constraints in multi-sensor systems. Error reduction is also possible through optimization-based methods that incorporate the dynamic and static constraints governing the system behavior over a period of time (e.g., the dynamic network algorithm); when this period includes multiple static phases for a link, its estimation accuracy is greatly improved. The current study enhances the error control capabilities of an existing inertial motion capture algorithm by multi-stage smoothing. The base algorithm benefits from imposing biomechanical constraints and is self-calibrating with respect to body geometry and some sensor parameters. The smoothing process, conducted over the stepping periods of each foot, comprises two stages: Kalman smoothing followed by error minimization via dynamic networks. The performance of the algorithm, deployed using both extended and square-root unscented Kalman filtering schemes (EKF and SRUKF, respectively), is experimentally evaluated during a fast-paced walking test using a custom-made inertial motion capture system. A comparison with an optical motion capture system showed that the proposed method decreased pelvis position and attitude estimation errors by 19% and 29%, respectively. Furthermore, compared to the EKF-based smoothing algorithm, the SRUKF-based method proved more effective at error reduction and parameter estimation.
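As a rough illustration of the ZUPT mechanism referred to above, the following Python sketch applies a zero-velocity pseudo-measurement as a standard Kalman measurement update. This is not the paper's implementation: the state layout (velocity at indices 3:6), the measurement noise covariance `R_zupt`, and the function name are assumptions chosen only to make the idea concrete.

```python
import numpy as np

def zupt_update(x, P, R_zupt=1e-4 * np.eye(3), vel_idx=slice(3, 6)):
    """Zero-velocity (ZUPT) measurement update (illustrative sketch only).

    Assumes a stance detector has already flagged the current sample as
    static, and that the state vector `x` stores velocity at `vel_idx`.
    """
    n = x.size
    H = np.zeros((3, n))
    H[:, vel_idx] = np.eye(3)        # observe the velocity states directly

    z = np.zeros(3)                  # pseudo-measurement: velocity is zero
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R_zupt         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(n) - K @ H) @ P
    return x_new, P_new
```

Applying such an update at every sample of a detected static phase bounds the velocity error and, through the cross-covariances in `P`, also pulls the position and attitude estimates back toward consistency; during dynamic phases no such measurement is available, which is why the errors grow between static phases.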