Abstract

Deep learning biomechanical models perform best when trained on large datasets; however, these are challenging to collect in gait laboratories, and few augmentation techniques are available. This study presents a data augmentation approach based on generative adversarial networks that generates synthetic motion capture (mocap) datasets of marker trajectories and ground reaction forces (GRFs). The proposed architecture, an adversarial autoencoder, consists of an encoder that compresses mocap data into a latent vector, a decoder that reconstructs the mocap data from the latent vector, and a discriminator that distinguishes random vectors from encoded latent vectors. Direct kinematics (DK) and inverse kinematics (IK) joint angles, GRFs, and inverse dynamics (ID) joint moments calculated for real and synthetic trials were compared using statistical parametric mapping to ensure realistic data generation and to select optimal architectural hyperparameters based on percentage average differences across the gait cycle. We observed negligible differences for DK-computed joint angles and GRFs, but not for the inverse methods (IK: 29.2%, ID: 35.5%). When the same architecture was also trained on the joint angles calculated by IK, we found no significant differences in kinematics and GRFs, and improved joint moment estimation (ID: 25.7%). Finally, we showed that our data augmentation approach improved the accuracy of joint kinematics (up to 23%, 0.8°) and vertical GRFs (11%) predicted by standard neural networks using a single simulated pelvic inertial measurement unit. These findings suggest that predictive deep learning models can benefit from the synthetic datasets produced with the proposed technique.
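
To make the encoder/decoder/discriminator structure described in the abstract concrete, the following is a minimal sketch of an adversarial autoencoder in PyTorch. All dimensions, layer sizes, and names (e.g. `N_CHANNELS`, `SEQ_LEN`, `LATENT_DIM`) are illustrative assumptions, not the authors' actual configuration or hyperparameters.

```python
# Minimal adversarial-autoencoder sketch (assumed dimensions, not the paper's setup).
import torch
import torch.nn as nn

N_CHANNELS = 40    # assumed: flattened marker-trajectory + GRF channels
SEQ_LEN = 100      # assumed: trial resampled to 100 time points (one gait cycle)
LATENT_DIM = 16    # assumed: size of the latent vector

class Encoder(nn.Module):
    """Compresses a mocap trial into a latent vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(N_CHANNELS * SEQ_LEN, 256), nn.ReLU(),
            nn.Linear(256, LATENT_DIM),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs the mocap trial from the latent vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, N_CHANNELS * SEQ_LEN),
        )
    def forward(self, z):
        return self.net(z).view(-1, N_CHANNELS, SEQ_LEN)

class Discriminator(nn.Module):
    """Distinguishes prior (random) vectors from encoded latent vectors."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

def training_step(x, enc, dec, disc, opt_ae, opt_disc):
    """One reconstruction + adversarial update on a batch of real trials x."""
    bce = nn.BCELoss()

    # 1) Reconstruction: encoder and decoder minimise the reconstruction error.
    z = enc(x)
    recon_loss = nn.functional.mse_loss(dec(z), x)

    # 2) Discriminator: label prior samples as real (1) and encoded vectors as fake (0).
    z_prior = torch.randn(x.size(0), LATENT_DIM)
    d_loss = bce(disc(z_prior), torch.ones(x.size(0), 1)) + \
             bce(disc(z.detach()), torch.zeros(x.size(0), 1))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 3) Encoder as generator: push encoded vectors toward the prior distribution.
    g_loss = bce(disc(enc(x)), torch.ones(x.size(0), 1))
    ae_loss = recon_loss + g_loss
    opt_ae.zero_grad(); ae_loss.backward(); opt_ae.step()
    return recon_loss.item(), d_loss.item()

# After training, synthetic trials are obtained by decoding samples from the prior:
# x_synth = dec(torch.randn(n_trials, LATENT_DIM))
```

The design choice this illustrates is the one named in the abstract: rather than discriminating between real and fake trials directly, the discriminator operates on the latent space, so sampling the prior and decoding yields new, realistic-looking mocap trials for augmentation.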
