Abstract

A computational system able to automatically and efficiently detect and classify falls would be beneficial for monitoring the elderly population and would speed up assistance procedures, reducing the risk of prolonged injury and death. One of the most common problems in such systems is the high number of false positives in their recognition scheme, which may cause an overload of calls to the surveillance system. We address this problem by proposing different topologies of a multimodal convolutional neural network trained to detect falls from RGB images and accelerometer data. We train and evaluate our networks on the UR Fall Detection and UP-Fall datasets and provide an extensive comparison with state-of-the-art models. Our model achieves strong results on the UR Fall Detection dataset and state-of-the-art performance on the UP-Fall dataset while relying only on easily available sensors, demonstrating that it can be a scalable solution for robust fall detection in the real world.
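To illustrate the general idea of the multimodal approach described above, the following is a minimal sketch of a two-branch network that fuses RGB-frame features with accelerometer features before classification. The class name, input resolution, window length, layer sizes, and late-fusion-by-concatenation design are illustrative assumptions; the paper's actual topologies are not specified in this abstract.

```python
# Hypothetical sketch of a two-branch RGB + accelerometer fall classifier (PyTorch).
# All sizes and the fusion scheme are assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn


class MultimodalFallNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # RGB branch: small 2D convolutional feature extractor over a single frame.
        self.rgb_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Accelerometer branch: 1D convolutions over a window of 3-axis samples.
        self.accel_branch = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Fusion head: concatenate both feature vectors, then classify fall / no-fall.
        self.classifier = nn.Sequential(
            nn.Linear(32 + 32, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, rgb: torch.Tensor, accel: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.rgb_branch(rgb), self.accel_branch(accel)], dim=1)
        return self.classifier(feats)


if __name__ == "__main__":
    model = MultimodalFallNet()
    rgb = torch.randn(4, 3, 224, 224)   # batch of RGB frames (assumed resolution)
    accel = torch.randn(4, 3, 50)       # batch of 3-axis accelerometer windows
    print(model(rgb, accel).shape)      # torch.Size([4, 2])
```

A usage note: fusing modality-specific features before the final classifier (late fusion) is one common way such multimodal detectors reduce false positives, since a fall hypothesis from one sensor can be confirmed or rejected by the other.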
