Abstract

Vision-based gait analysis can play an important role in the remote and continuous monitoring of the health conditions of the elderly. However, most vision-based approaches compute gait spatiotemporal parameters from human pose information and report only averaged values. This study aimed to propose a straightforward method for stride-by-stride estimation of spatiotemporal gait parameters. A total of 160 elderly individuals participated in this study. Data were gathered simultaneously with a GAITRite system and a mobile camera. Three deep learning networks were trained with a few RGB frames as input and a continuous 1D signal containing both spatial and temporal gait parameters as output. The trained networks estimated stride lengths with correlations of 0.938 or higher and detected gait events with F1-scores of 0.914 or higher. Clinical relevance: The proposed method showed excellent agreement with the GAITRite system in analyzing spatiotemporal gait parameters. Our approach can be applied to monitor the health conditions of the elderly based on their gait parameters for early diagnosis of diseases, proper treatment, and timely intervention.
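
The abstract describes networks that map a few RGB frames to a continuous 1D signal encoding spatiotemporal gait parameters, but does not disclose the architectures. The following is a minimal sketch of that input-output mapping only; every layer choice, tensor shape, and name (e.g. GaitSignalNet, n_frames, signal_len) is a hypothetical illustration, not the authors' model.

```python
# Hypothetical sketch: illustrates "a few RGB frames in, a continuous 1D gait
# signal out" as described in the abstract. All architecture details below are
# assumptions, not the networks used in the paper.
import torch
import torch.nn as nn

class GaitSignalNet(nn.Module):
    """Maps a short clip of RGB frames to a continuous 1D gait signal."""

    def __init__(self, n_frames: int = 8, signal_len: int = 256):
        super().__init__()
        # Treat the clip as a stack of frames: (n_frames * 3) input channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(n_frames * 3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global spatial pooling -> (B, 64, 1, 1)
        )
        # Regress the 1D signal carrying spatial and temporal gait parameters.
        self.head = nn.Linear(64, signal_len)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, n_frames, 3, H, W) -> fold frames into the channel axis
        b, t, c, h, w = clip.shape
        feat = self.encoder(clip.reshape(b, t * c, h, w)).flatten(1)
        return self.head(feat)  # (batch, signal_len)

# Example: 8 RGB frames at 224x224 -> one 256-sample gait signal per clip.
model = GaitSignalNet()
signal = model(torch.randn(2, 8, 3, 224, 224))
print(signal.shape)  # torch.Size([2, 256])
```

In such a formulation, stride lengths and gait events would be read off the predicted 1D signal rather than computed from an intermediate pose estimate, which matches the abstract's contrast with pose-based approaches.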
