Abstract

Falls are the leading cause of fatal injury in the elderly. Presently available fall-detection devices have significant drawbacks, including blind spots and sensitivity to low lighting, lack of privacy, and the need for elderly users to operate them despite cognitive decline. Radio-frequency (RF) imaging presents a promising alternative, as RF signals pass through most materials while reflecting strongly off the human body. FallWatch was designed as an artificial-intelligence model that uses RF signals to detect falls in real time despite visual obstruction, while overcoming the drawbacks of RF sensing, including low-resolution imaging and body-part specularity. Using an RF antenna array, multiple fall and non-fall examples were captured through several media of obstruction in cross-person and cross-environment settings. The resulting data were used to train a deep learning model consisting of: 1) a convolutional neural network to extract relevant features and capture spatial relationships, 2) an attention mechanism to enable generalization to new people and environments, and 3) a recurrent neural network with long short-term memory to capture temporal relationships between RF frames. FallWatch successfully detected falls not only in through-wall scenarios but also in cross-person and cross-environment settings, surpassing the performance of other fall-detection systems. In conclusion, FallWatch presents a novel end-to-end approach to fall detection in the elderly and enables their monitoring in multiple care settings.
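
As a rough illustration of the three-stage architecture summarized above, the PyTorch sketch below chains a small convolutional feature extractor, a per-frame attention weighting, and an LSTM into a binary fall classifier. The layer sizes, the input format (a sequence of single-channel RF heatmaps), and all identifiers are illustrative assumptions rather than the authors' implementation.

import torch
import torch.nn as nn

class FallWatchSketch(nn.Module):
    """Hypothetical sketch of the CNN + attention + LSTM pipeline described
    in the abstract; shapes and layer sizes are assumptions, not the
    published FallWatch model."""

    def __init__(self, hidden_size=128, num_classes=2):
        super().__init__()
        # 1) CNN: extract spatial features from each RF frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        feat_dim = 32 * 4 * 4
        # 2) Attention: weight per-frame features to aid generalization.
        self.attn = nn.Linear(feat_dim, 1)
        # 3) LSTM: model temporal relationships across RF frames.
        self.lstm = nn.LSTM(feat_dim, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, frames):
        # frames: (batch, time, 1, height, width) sequence of RF heatmaps
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).flatten(1)   # (b*t, feat_dim)
        feats = feats.view(b, t, -1)                         # (b, t, feat_dim)
        weights = torch.softmax(self.attn(feats), dim=1)     # (b, t, 1)
        feats = feats * weights                              # attention-weighted frames
        out, _ = self.lstm(feats)                            # (b, t, hidden)
        return self.classifier(out[:, -1])                   # fall / no-fall logits

# Example: 2 clips of 30 RF frames at 64x64 resolution -> logits of shape (2, 2)
# logits = FallWatchSketch()(torch.randn(2, 30, 1, 64, 64))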
