Abstract

Falls are one of the greatest risks for older adults living alone at home. This paper presents a novel vision-based fall detection approach to support independent living for older adults by analysing the motion and shape of the human body. The proposed approach employs a new set of features to detect a fall. Motion information extracted from a segmented silhouette provides a useful cue for classifying different behaviours, while variation in shape and the projection histogram can be used to describe human body postures and subsequent fall events. The approach extracts motion information using a best-fit approximated ellipse and a bounding box around the human body, computes projection histograms and tracks the head position over time, generating 10 features to identify falls. These features are fed into a multilayer perceptron neural network for fall classification. Experimental results show the reliability of the proposed approach, with a high fall detection rate of 99.60% and a low false alarm rate of 2.62% on the UR Fall Detection dataset. Comparisons with state-of-the-art fall detection techniques show the robustness of the proposed approach.
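The shape-based features described above (approximated ellipse and bounding box around the silhouette) can be illustrated with a minimal sketch. The Python/NumPy code below is an assumed, illustrative formulation only: it derives an ellipse orientation from second-order image moments and a bounding-box aspect ratio for a binary silhouette, and is not the authors' exact feature set or implementation.

```python
import numpy as np

def silhouette_features(mask: np.ndarray) -> dict:
    """Illustrative shape features from a binary silhouette (H x W, values 0/1)."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return {"orientation_deg": 0.0, "aspect_ratio": 0.0}

    # Bounding box around the silhouette
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()
    width, height = x1 - x0 + 1, y1 - y0 + 1

    # Second-order central moments for an approximated best-fit ellipse
    cx, cy = xs.mean(), ys.mean()
    mu20 = np.mean((xs - cx) ** 2)
    mu02 = np.mean((ys - cy) ** 2)
    mu11 = np.mean((xs - cx) * (ys - cy))

    # Orientation of the ellipse's major axis
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

    return {
        "orientation_deg": float(np.degrees(theta)),
        "aspect_ratio": float(width / height),  # > 1 suggests a horizontal (lying) posture
    }
```

A sudden change of the orientation angle together with a bounding box that becomes wider than it is tall is the kind of cue such features are intended to capture during a fall.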

Highlights

  • Falls in older adults are a relatively common occurrence that can have dramatic health consequences

  • The second step is to extract robust features which describe the change in human shape and allow discrimination of falls from other activities such as lying and sitting. These are based on motion, change in the human shape, the projection histogram and the temporal change of head position (see the sketch after this list)

  • The paper proposes a method for monitoring human activities in a home environment and detecting fall events based on motion information, changes in shape orientation, the position of the human head and projection histograms
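
The projection-histogram feature referred to in the highlights can be sketched as follows. This is a minimal, assumed formulation in which the silhouette is summed along image rows and columns and resampled to a fixed number of bins; the bin count and normalisation are illustrative choices, not the descriptor reported in the paper.

```python
import numpy as np

def projection_histograms(mask: np.ndarray, bins: int = 20):
    """Row/column projection histograms of a binary silhouette (illustrative)."""
    vertical = mask.sum(axis=1).astype(float)    # foreground pixels per row
    horizontal = mask.sum(axis=0).astype(float)  # foreground pixels per column

    def resample(p: np.ndarray) -> np.ndarray:
        # Interpolate to a fixed length and normalise so frames of
        # different sizes can be compared
        idx = np.linspace(0, len(p) - 1, bins)
        h = np.interp(idx, np.arange(len(p)), p)
        total = h.sum()
        return h / total if total > 0 else h

    return resample(vertical), resample(horizontal)
```

For an upright posture most of the mass concentrates in the horizontal projection along a narrow band of columns, whereas after a fall the vertical projection flattens out; comparing the two histograms over time is one way such features can separate postures.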


Summary

INTRODUCTION

Falls in older adults are a relatively common occurrence that can have dramatic health consequences. Much work has been undertaken investigating the use of visual-based sensors for fall detection using single [10], multiple [11] and omni-directional [12] cameras. The second step of the proposed approach is to extract robust features which describe the change in human shape and allow discrimination of falls from other activities such as lying and sitting. These are based on motion, change in the human shape, the projection histogram and the temporal change of head position. The features extracted from the human silhouette are fed into a multilayer perceptron (MLP) neural network for fall detection.
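
As a rough illustration of this final classification stage, the snippet below trains a small multilayer perceptron on pre-extracted 10-dimensional feature vectors using scikit-learn. The hidden-layer size, training settings and the randomly generated stand-in data are assumptions for demonstration only, not the network configuration or features reported in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# X: one 10-dimensional feature vector per sequence (motion, shape,
# projection-histogram and head-position features); y: 1 = fall, 0 = no fall.
# Random values stand in for features extracted from real video data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = rng.integers(0, 2, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
mlp.fit(X_train, y_train)

print(classification_report(y_test, mlp.predict(X_test)))
```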

RELATED WORK
ENHANCED FEATURES FOR FALL DETECTION
QUANTIFY THE MOTION
APPROXIMATED ELLIPSE
PROJECTION HISTOGRAM
TEMPORAL CHANGES OF HEAD POSITION
FALLS CLASSIFICATION AND DETECTION BY NEURAL NETWORK
EXPERIMENTAL DATASET AND PROCESS
VIII. CONCLUSIONS AND FUTURE WORK