Abstract

There are several large datasets captured from motion tracking systems that could be useful for training wearable human activity recognition (HAR) systems, if only their spatial data could be mapped to the equivalent inertial measurement unit (IMU) data that would be sensed on the body. In this paper, we describe a mapping from 3D Vicon motion tracking data to data collected from a BlueSense on-body IMU. We characterise the error incurred in order to discern the extent to which it is possible to generate useful training data for a wearable activity recognition system from data collected with a motion capture system. We analyse this by mapping Vicon motion tracking data to rotational velocity and linear acceleration at the head, and comparing the result with actual gyroscope and accelerometer data collected by an IMU mounted on the head. In a 15-minute dataset comprising three static activities (sitting, standing and lying down), we found that 95% of the reconstructed gyroscope data fell within an error of [−7.25; +7.46] \(deg \cdot s^{-1}\), while 95% of the reconstructed accelerometer data fell within [−96.1; +72.9] \(m \cdot G\). However, when more movement was introduced by including data collected while walking, these ranges increased to [−19.0; +18.2] \(deg \cdot s^{-1}\) for the gyroscope and [−208; +186] \(m \cdot G\) for the accelerometer. We conclude that generating accurate IMU data from motion capture datasets is possible; this could provide larger volumes of training data for activity recognition tasks and help data-hungry techniques such as deep learning to be employed on a larger scale within the domain of human activity recognition.
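The abstract does not spell out the mapping itself, but the core reconstruction it describes can be sketched roughly as follows: differentiate the tracked orientation to obtain body-frame angular velocity (gyroscope), and double-differentiate the tracked position, subtract gravity to form specific force, and rotate the result into the body frame (accelerometer). The minimal Python sketch below illustrates this under stated assumptions; the function name `mocap_to_imu`, the z-up world frame, and the scalar-last quaternion convention are illustrative choices, not taken from the paper, and sensor noise, bias, and mounting offsets are ignored.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity (m/s^2), z-up (assumed)

def mocap_to_imu(positions, quats, fs):
    """Reconstruct body-frame gyroscope and accelerometer signals from
    motion-capture data sampled at fs Hz.

    positions: (N, 3) head position in metres, world frame (assumed)
    quats:     (N, 4) segment-to-world quaternions, scalar-last (assumed)
    Returns (gyro, accel): (N-1, 3) in deg/s and (N, 3) in m/s^2.
    """
    dt = 1.0 / fs
    rots = R.from_quat(quats)

    # Gyroscope: the relative rotation between consecutive samples,
    # expressed in the body frame, divided by dt approximates the
    # angular velocity; convert rad/s to deg/s to match the paper's units.
    rel = rots[:-1].inv() * rots[1:]
    gyro = np.degrees(rel.as_rotvec()) / dt

    # Accelerometer: double-differentiate position for world-frame linear
    # acceleration, subtract gravity for specific force (at rest this is
    # (0, 0, +9.81)), then rotate into the body frame. Divide by 9.81 to
    # express the result in G.
    vel = np.gradient(positions, dt, axis=0)
    acc_world = np.gradient(vel, dt, axis=0)
    specific_force = acc_world - GRAVITY
    accel = rots.inv().apply(specific_force)

    return gyro, accel
```

In practice the sensor would also have a fixed mounting rotation and offset relative to the tracked segment, which would need to be calibrated and applied before comparison with real IMU data.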
