Abstract

To deal with the projection problem that arises in fall detection with two-dimensional (2D) grey or color images, this paper proposes a robust fall detection method based on spatio-temporal context tracking over three-dimensional (3D) depth images captured by the Kinect sensor. In the pre-processing stage, the parameters of the Single Gauss Model (SGM) are estimated and the coefficients of the floor-plane equation are extracted from the background images. Once a human subject appears in the scene, the silhouette is extracted by the SGM and ellipse fitting of the foreground is used to determine the head position. The dense spatio-temporal context (STC) algorithm then tracks the head position, and the distance from the head to the floor plane is calculated in every subsequent frame of the depth image. When this distance falls below an adaptive threshold, the centroid height of the human body is used as a second criterion to decide whether a fall has occurred. Finally, four groups of experiments with different falling directions are performed. Experimental results show that the proposed method detects fall incidents occurring in different orientations, with low computational complexity.
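The fall test described above combines a head-to-floor distance check with a centroid-height confirmation. The following is a minimal sketch of that two-stage decision, not the authors' implementation: the plane representation Ax + By + Cz + D = 0, the threshold values, and all function names are assumptions for illustration.

```python
import math

def point_to_plane_distance(p, plane):
    """Perpendicular distance from a 3D point p = (x, y, z) to the plane
    Ax + By + Cz + D = 0, given as plane = (A, B, C, D)."""
    a, b, c, d = plane
    x, y, z = p
    return abs(a * x + b * y + c * z + d) / math.sqrt(a * a + b * b + c * c)

def is_fall(head, centroid, floor_plane, head_thresh, centroid_thresh):
    """Two-stage test: a low head-to-floor distance triggers the check,
    and a low centroid height confirms the fall."""
    if point_to_plane_distance(head, floor_plane) >= head_thresh:
        return False
    return point_to_plane_distance(centroid, floor_plane) < centroid_thresh

# Example: floor plane z = 0 (i.e. 0x + 0y + 1z + 0 = 0), distances in metres.
floor = (0.0, 0.0, 1.0, 0.0)
print(point_to_plane_distance((0.5, 0.2, 1.7), floor))  # 1.7 (standing head height)
print(is_fall(head=(0.5, 0.2, 0.25), centroid=(0.5, 0.3, 0.15),
              floor_plane=floor, head_thresh=0.4, centroid_thresh=0.3))  # True
```

The second criterion guards against false alarms such as bending down, where the head may be near the floor while the body centroid remains high.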

Highlights

  • According to a survey from the National Institutes of Health (NIH), more than 1.6 million older U.S. adults suffer fall-related injuries each year [1]

  • To deal with the projection problem of 2D grey or color images, this paper presents a robust fall detection method based on spatio-temporal context tracking, analyzing three-dimensional (3D) depth images captured by the Kinect sensor

  • The head is tracked with the dense spatio-temporal context (STC) algorithm, and the head-to-floor distance together with the centroid height is used to decide whether a fall has occurred


Introduction

According to a survey from the National Institutes of Health (NIH), more than 1.6 million older U.S. adults suffer fall-related injuries each year [1]. To deal with the projection problem that arises in fall detection with 2D grey or color images, this paper presents a robust fall detection method based on spatio-temporal context tracking, analyzing three-dimensional (3D) depth images captured by the Kinect sensor. The method proceeds in three steps: (1) the foreground silhouette is extracted with a Single Gauss Model (SGM) and the head position is determined by ellipse fitting; (2) the 3D position of the head is tracked using the dense spatio-temporal context learning method; and (3) the distance from the 3D head position to the floor plane is calculated in every frame of the depth image and compared against an adaptive threshold, with the centroid height serving as a second criterion.
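The SGM foreground step in (1) amounts to a per-pixel single-Gaussian background model: a depth pixel is labeled foreground when it deviates from the learned background mean by more than k standard deviations. The sketch below illustrates this idea under assumed parameter names (learning rate `alpha`, threshold `k`) and is not the paper's actual estimation procedure.

```python
# Per-pixel single-Gaussian (SGM) background model over depth frames.
# alpha (update rate) and k (deviation threshold) are assumed parameter names.
class SingleGaussModel:
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        # Initialize the background mean from the first (empty-scene) frame.
        self.mean = [[float(v) for v in row] for row in first_frame]
        self.var = [[100.0] * len(row) for row in first_frame]  # initial variance
        self.alpha, self.k = alpha, k

    def segment(self, frame):
        """Return a binary foreground mask and update the background model."""
        mask = []
        for i, row in enumerate(frame):
            mask_row = []
            for j, d in enumerate(row):
                mu, var = self.mean[i][j], self.var[i][j]
                fg = abs(d - mu) > self.k * var ** 0.5
                mask_row.append(1 if fg else 0)
                if not fg:  # update statistics only where the pixel is background
                    self.mean[i][j] = (1 - self.alpha) * mu + self.alpha * d
                    self.var[i][j] = (1 - self.alpha) * var + self.alpha * (d - mu) ** 2
            mask.append(mask_row)
        return mask

# Example: a 2x2 depth frame (millimetres); one pixel moves closer to the camera.
model = SingleGaussModel([[2000.0, 2000.0], [2000.0, 2000.0]])
print(model.segment([[2000.0, 1500.0], [2000.0, 2000.0]]))  # [[0, 1], [0, 0]]
```

Pixels flagged as foreground are excluded from the background update, so a person standing still does not gradually merge into the background model.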

Wearable Sensor-Based Methods
Ambient Sensor-Based Methods
Computer Vision-Based Methods
The Proposed Method
Foreground and Centroid Extraction
Head Position Extraction and Head Tracking
Floor Plane Extraction
Fall Detection
Experimental Results and Discussion
Conclusions
