Abstract
A computational system able to automatically and efficiently detect and classify falls would benefit monitoring of the elderly population and speed up assistance procedures, reducing the risk of prolonged injury and death. One of the most common problems in such systems is the high number of false positives in the recognition scheme, which can overload the surveillance system with alarm calls. We address this problem by proposing different topologies of a multimodal convolutional neural network trained to detect falls from RGB images and accelerometer data. We train and evaluate our networks on the UR Fall Detection and UP-Fall datasets and provide an extensive comparison with state-of-the-art models. Our model achieves strong results on the UR Fall Detection dataset and state-of-the-art performance on the UP-Fall dataset while relying only on easily available sensors, demonstrating that it can be a scalable solution for robust fall detection in the real world.
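To make the multimodal idea concrete, the sketch below shows one plausible two-branch, late-fusion design of the kind the abstract describes: a convolutional branch for RGB frames and a 1D-convolutional branch for accelerometer windows, whose features are concatenated and fed to a fall / no-fall classifier. The class name `MultimodalFallNet`, the layer sizes, and the input resolutions are illustrative assumptions, not the authors' exact topology.

```python
# Minimal sketch of a two-branch multimodal fall detector (late fusion).
# Layer sizes and input shapes are assumptions for illustration only.
import torch
import torch.nn as nn

class MultimodalFallNet(nn.Module):
    def __init__(self, accel_channels: int = 3, num_classes: int = 2):
        super().__init__()
        # RGB branch: small convolutional stack over 64x64 frames.
        self.rgb_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
        )
        # Accelerometer branch: 1D convolutions over a short sensor window.
        self.accel_branch = nn.Sequential(
            nn.Conv1d(accel_channels, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 32), nn.ReLU(),
        )
        # Fusion head: concatenated features -> fall / no-fall logits.
        self.classifier = nn.Linear(128 + 32, num_classes)

    def forward(self, rgb: torch.Tensor, accel: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.rgb_branch(rgb), self.accel_branch(accel)], dim=1)
        return self.classifier(fused)

# Example forward pass on dummy data: a batch of 4 RGB frames (3x64x64)
# paired with 3-axis accelerometer windows of 50 samples.
model = MultimodalFallNet()
logits = model(torch.randn(4, 3, 64, 64), torch.randn(4, 3, 50))
print(logits.shape)  # torch.Size([4, 2])
```

Late fusion of this kind keeps each modality's feature extractor independent, so either branch can be swapped or retrained without changing the other; the paper's actual topologies may differ.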