Abstract

Accurate detection of gait events from video is a challenging problem. Most vision-based methods for gait event detection rely heavily on gait features estimated from gait silhouettes and human pose information to acquire accurate gait data. This paper presents an accurate, multi-view approach using deep convolutional neural networks for efficient and practical gait event detection that requires no additional gait feature engineering. In particular, we aim to detect gait events from frontal views as well as lateral views. We conducted experiments with four different deep CNN models on our own dataset, which includes three different walking actions from 11 healthy participants. The models take nine consecutive frames stacked together as input and output, for each frame, a probability vector over the gait events toe-off and heel-strike. Trained only on video frames, the deep CNN models detected gait events with 93% or higher accuracy while the user walked straight or walked around, on both frontal and lateral views.
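The input windowing and per-frame event decoding described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the 0.5 decision threshold, and the two-element probability format are assumptions; only the 9-frame window and the toe-off/heel-strike event classes come from the abstract.

```python
# Hypothetical sketch of 9-frame input stacking and per-frame gait-event
# decoding. Names and the 0.5 threshold are assumptions for illustration.

def stack_windows(frames, window=9):
    """Group consecutive frames into overlapping windows of `window` frames."""
    return [frames[i:i + window] for i in range(len(frames) - window + 1)]

def decode_events(probabilities, threshold=0.5):
    """Map per-frame probability pairs (p_toe_off, p_heel_strike) to labels."""
    labels = []
    for p_to, p_hs in probabilities:
        if max(p_to, p_hs) < threshold:
            labels.append("none")           # no gait event for this frame
        elif p_to >= p_hs:
            labels.append("toe-off")
        else:
            labels.append("heel-strike")
    return labels

frames = list(range(12))              # stand-in for 12 video frames
windows = stack_windows(frames)       # overlapping 9-frame model inputs
print(len(windows))                   # → 4
print(decode_events([(0.9, 0.1), (0.2, 0.3), (0.1, 0.8)]))
# → ['toe-off', 'none', 'heel-strike']
```

Each 9-frame window would be fed to one of the CNN models, whose per-frame probability outputs are then decoded into discrete toe-off and heel-strike events.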
