Abstract

Fall detection systems can help provide quick assistance to a person who has fallen, diminishing the severity of the fall's consequences. Real-time fall detection is important because it reduces both the fear and the time a person remains lying on the floor after falling. In recent years, multimodal fall detection approaches have been developed to gain precision and robustness. In this work, we propose a multimodal fall detection system based on wearable sensors, ambient sensors, and vision devices. We use long short-term memory (LSTM) networks and convolutional neural networks (CNNs) for our analysis, since they can extract features from raw data and are well suited to real-time detection. To test our proposal, we built a public multimodal dataset for fall detection. In our experiments, the proposed method reached 96.4% accuracy and improved precision, recall, and $F_1$-score over single LSTM or CNN networks for fall detection.
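The abstract does not specify the network configuration, so the following is only a minimal sketch of the general idea it describes: a 1D-CNN branch extracting features from one raw sensor stream and an LSTM branch modeling another, fused for binary fall/no-fall classification. All names, channel counts, window lengths, and layer sizes here are illustrative assumptions, not values from the paper.

```python
# Sketch of a CNN + LSTM fusion classifier over raw sensor windows.
# Hypothetical setup: wearable stream = 6 channels (accel + gyro axes),
# ambient stream = 4 channels, windows of 128 samples.
import torch
import torch.nn as nn

class MultimodalFallDetector(nn.Module):
    def __init__(self, wearable_channels=6, ambient_channels=4,
                 hidden=64, num_classes=2):
        super().__init__()
        # CNN branch: local motion features from raw wearable signals.
        self.cnn = nn.Sequential(
            nn.Conv1d(wearable_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # -> (batch, 64, 1)
        )
        # LSTM branch: temporal dynamics of the ambient sensor stream.
        self.lstm = nn.LSTM(ambient_channels, hidden, batch_first=True)
        # Fusion head: concatenate both feature vectors, then classify.
        self.head = nn.Linear(64 + hidden, num_classes)

    def forward(self, wearable, ambient):
        # wearable: (batch, channels, time); ambient: (batch, time, channels)
        f_cnn = self.cnn(wearable).squeeze(-1)  # (batch, 64)
        _, (h_n, _) = self.lstm(ambient)        # h_n: (1, batch, hidden)
        f_lstm = h_n[-1]                        # (batch, hidden)
        return self.head(torch.cat([f_cnn, f_lstm], dim=1))

# Usage on dummy data:
model = MultimodalFallDetector()
wearable = torch.randn(8, 6, 128)  # 8 windows, 6 channels, 128 samples
ambient = torch.randn(8, 128, 4)
logits = model(wearable, ambient)  # (8, 2) scores for [no-fall, fall]
```

A fused head of this kind is one plausible reading of how combining modalities could outperform a single LSTM or CNN, as the abstract reports; the paper itself may fuse modalities differently.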
