Abstract

A fall occurs when a person's movement coordination is disturbed, forcing them to come to rest on the ground unintentionally and exposing them to serious health risks. The objective of this work is to develop a Multimodal SpatioTemporal Skeletal Kinematic Gait Feature Fusion (MSTSK-GFF) classifier for detecting falls in video data. Gait refers to an individual's walking pattern, and a fall recorded on video manifests as discrepancies and irregularities in gait patterns; analyzing these patterns plays a vital role in identifying fall risk. However, assessing gait patterns from video data remains challenging because of their spatial and temporal feature dependencies. The proposed MSTSK-GFF framework presents a multimodal feature fusion process that overcomes these challenges, generating two sets of spatiotemporal kinematic gait features using a SpatioTemporal Graph Convolutional Network (STGCN) and a 1D-CNN model. The two feature sets are combined through a concatenative feature fusion process, and a classification model is constructed for detecting falls. To optimize the network weights, a bio-inspired spotted hyena optimizer is applied during training. Finally, the performance of the classification model is evaluated and compared for fall detection in videos. The proposed work is evaluated on two vision-based fall datasets, namely the UR Fall Detection (URFD) dataset and a self-built dataset. The experimental outcomes demonstrate the effectiveness of MSTSK-GFF, which achieves classification accuracies of 96.53% and 95.80% on the two datasets, outperforming existing state-of-the-art techniques.
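To make the concatenative fusion step concrete, the sketch below shows one plausible PyTorch realization: per-clip feature vectors from an STGCN branch and a 1D-CNN branch are concatenated and passed to a small classification head. The class name ConcatFusionClassifier, the feature dimensions (256 and 128), and the two-layer head are illustrative assumptions, not the paper's implementation; the branch networks and the spotted hyena weight optimization are outside this snippet.

import torch
import torch.nn as nn

class ConcatFusionClassifier(nn.Module):
    """Illustrative fusion head (hypothetical): concatenates a
    spatiotemporal feature vector from an STGCN branch with one
    from a 1D-CNN branch and classifies fall vs. no-fall.
    All dimensions are placeholder assumptions."""

    def __init__(self, stgcn_dim=256, cnn_dim=128, num_classes=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(stgcn_dim + cnn_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, f_stgcn, f_cnn):
        # Concatenative feature fusion along the channel dimension
        fused = torch.cat([f_stgcn, f_cnn], dim=1)
        return self.head(fused)

# Example: a batch of 8 clips, one feature vector per branch per clip
model = ConcatFusionClassifier()
logits = model(torch.randn(8, 256), torch.randn(8, 128))  # shape (8, 2)

In this reading, fusion happens at the feature level (late fusion of per-clip embeddings); the paper's actual head depth and dimensionality may differ.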
