Abstract
Unintentional human falls, especially among seniors, lead to serious injuries, fatalities, and a reduced quality of life. Vision-based fall detection methods have demonstrated their usefulness in timely fall response, helping to mitigate such injuries. This paper presents an automated vision-based fall detection system that triggers immediate fall reporting. By incorporating human segmentation and image fusion in the pre-processing stage, the system enhances the accuracy of human action classification, thereby ensuring precise fall alerts. It further employs the novel 4-stream 3D convolutional neural network (4S-3DCNN) model to learn distinct but consecutive spatial and temporal features. The system processes video input or live surveillance, segmenting human presence every 32 frames using a fine-tuned deep-learning model and applying a three-level image fusion to accentuate movement differences. This technique produces four pre-processed images, which are input to the 4S-3DCNN model for classification. Consecutive detection of the “Falling” and “Fallen” actions triggers an alert for immediate intervention. The original 4S-3DCNN model is an end-to-end trained deep learning model with a fully connected layer serving as the classifier. The research also evaluates the performance of combining the 4S-3DCNN model with Autoencoder and Support Vector Machine (SVM) networks as classifiers. The SVM classifier demonstrated ideal fall detection performance, with 100% accuracy on the MCFD, URFD, and Le2i FDD datasets. The proposed system is vital for detecting and preventing falls and for reducing healthcare expenses and productivity losses.
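The alert rule described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the 32-frame window size and the “Falling”/“Fallen” class names come from the abstract, while the function name and label strings are assumptions for illustration.

```python
# Hypothetical sketch of the fall-alert rule: the stream is classified in
# 32-frame windows, and an alert fires only when a "Falling" prediction is
# immediately followed by a "Fallen" prediction in the next window.
WINDOW_FRAMES = 32  # per the abstract, human presence is segmented every 32 frames

def should_alert(window_labels):
    """Return True if any consecutive pair of window labels is ("Falling", "Fallen")."""
    return any(
        prev == "Falling" and curr == "Fallen"
        for prev, curr in zip(window_labels, window_labels[1:])
    )
```

Requiring the two actions in consecutive windows, rather than either one alone, is what suppresses false alarms from transient misclassifications of a single window.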