Abstract

Falls are a pervasive problem among elderly populations and are associated with significant morbidity and mortality. Prompt recognition of falls is challenging, especially in elderly people with cognitive or physical impairments who cannot raise the alarm themselves. Wearable sensors such as smartwatches and wristbands can be used to detect fall behaviour, but these devices are intrusive, require user compliance and raise issues of battery endurance and comfort, reducing their effectiveness in elderly populations. They can also only cover patients already recognised as fall risks and cannot protect unidentified patients. Leveraging state-of-the-art deep learning, we introduce two automated fall detection techniques that use visual information from cameras: 1) a self-supervised autoencoder that distinguishes falls from normal behaviour as an anomaly detection problem, and 2) supervised fall activity recognition based on human posture. Five models are trained and evaluated on two publicly available video datasets composed of activities of daily living and simulated falls in an office-like environment. To test the models for real-world fall detection, we developed two new datasets that include videos of real falls in elderly people as well as more complex backgrounds and scenarios. The experimental results show that autoencoder detectors can predict falls directly from images when the background is pre-learned, while the pose-based approach learns from the foreground body pose only and therefore better handles complex scenarios and backgrounds. Video-based methods offer a potential route to low-cost and non-invasive fall detection, increasing safety in care environments while also helping elderly people retain independence in their own homes.
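To illustrate the first of the two approaches, the sketch below shows the general anomaly-detection idea: a convolutional autoencoder is trained only on frames of normal activity, so frames containing falls should reconstruct poorly and can be flagged by their reconstruction error. This is a minimal sketch under stated assumptions, not the paper's implementation; the architecture, input resolution, data loader and threshold are all illustrative. The pose-based approach would instead feed extracted body keypoints to a supervised classifier.

```python
# Minimal sketch of autoencoder-based fall detection as anomaly detection.
# Assumptions (not from the paper): 64x64 grayscale frames, a small
# convolutional encoder/decoder, and an illustrative error threshold.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress a 64x64 frame to an 8x8 latent feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )
        # Decoder: mirror the encoder to reconstruct the input frame.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model, frames):
    """Per-frame mean reconstruction error; high values suggest a fall."""
    model.eval()
    with torch.no_grad():
        recon = model(frames)
        return torch.mean((recon - frames) ** 2, dim=(1, 2, 3))

model = FrameAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training loop over normal-activity frames only; the loader is a
# hypothetical source of (B, 1, 64, 64) batches.
# for frames in normal_activity_loader:
#     optimizer.zero_grad()
#     loss = loss_fn(model(frames), frames)
#     loss.backward()
#     optimizer.step()

# At inference, flag frames whose error exceeds a threshold calibrated
# on held-out normal footage (0.01 here is purely illustrative).
THRESHOLD = 0.01
test_frame = torch.rand(1, 1, 64, 64)  # stand-in for a camera frame
is_fall = anomaly_score(model, test_frame) > THRESHOLD
```

Because the autoencoder only ever sees normal behaviour during training, no labelled fall footage is needed; the trade-off, consistent with the results above, is that the learned reconstruction is tied to the pre-learned background.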
