Abstract

Accurate gait event detection from video is a challenging problem. Most vision-based methods for gait event detection rely heavily on gait features estimated from gait silhouettes and human pose information to acquire accurate gait data. This paper presents an accurate multi-view approach using deep convolutional neural networks for efficient and practical gait event detection, without requiring additional gait feature engineering. In particular, we aim to detect gait events from frontal views as well as lateral views. We conducted experiments with four different deep CNN models on our own dataset, which includes three different walking actions from 11 healthy participants. The models took nine consecutive frames stacked together as input and produced, for each frame, a probability vector over the gait events toe-off and heel-strike. Trained only on video frames, the deep CNN models detected gait events with 93% or higher accuracy while the user was walking straight or walking around, in both frontal and lateral views.
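The input/output layout described above (nine stacked frames in, a per-frame probability vector over gait events out) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `stack_frames`, `dummy_cnn`, the frame size, and the three-class output layout (no-event, toe-off, heel-strike) are all hypothetical assumptions standing in for the trained models.

```python
import numpy as np

def stack_frames(frames):
    # frames: list of 9 consecutive grayscale frames, each (H, W).
    # Stacked along a leading axis -> (9, H, W); the actual axis ordering
    # in the paper's models may differ (assumption).
    return np.stack(frames, axis=0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dummy_cnn(stack, rng):
    # Hypothetical stand-in for a trained CNN: maps the (9, H, W) stack
    # to per-frame probabilities over [no-event, toe-off, heel-strike].
    n_frames = stack.shape[0]
    logits = rng.standard_normal((n_frames, 3))
    return softmax(logits, axis=-1)  # one probability vector per frame

rng = np.random.default_rng(0)
frames = [rng.random((64, 64)) for _ in range(9)]  # toy 64x64 frames (assumption)
x = stack_frames(frames)
probs = dummy_cnn(x, rng)
print(x.shape)      # (9, 64, 64)
print(probs.shape)  # (9, 3)
```

Each row of `probs` sums to 1, so a gait event can be assigned per frame by taking the argmax, which is the per-frame classification setup the abstract describes.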
