Abstract

This paper addresses the problem of fall detection from RGB-D image sequences. Towards this goal, we propose a novel Key Point Trajectory Model that represents a fall action as a series of trajectory descriptors. In the proposed model, 16 key points, comprising 14 skeleton points and 2 centers of body parts, are extracted from each pair of RGB and depth images. A global trajectory descriptor is then constructed from the 16 trajectories obtained by connecting the key points across several frames of the RGB-D sequence. The trajectory descriptor incorporates the spatial, depth, and temporal context of the key points and characterizes the global motion of the human body over a short period of time. A random forest is employed to classify the trajectory descriptors, and an integration rule is developed to detect falls from the classification results of all trajectory descriptors within a video. Experiments conducted on two fall detection datasets demonstrate that our method outperforms state-of-the-art methods.
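To make the described pipeline concrete, the following is a minimal sketch in Python of how trajectory descriptors could be built from per-frame key points and integrated into a video-level decision. The window length, the displacement-based descriptor encoding, and the majority-vote integration rule are illustrative assumptions; the abstract does not specify the paper's actual descriptor construction or integration rule.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Assumed parameters (not specified in the abstract):
NUM_POINTS = 16   # 14 skeleton points + 2 centers of body parts
WINDOW = 15       # number of frames each trajectory spans (assumed)

def trajectory_descriptor(keypoints):
    """Build a global trajectory descriptor from a short window of frames.

    keypoints: array of shape (WINDOW, NUM_POINTS, 3), holding the
    (x, y, depth) coordinates of each key point in each frame.
    Returns a flat vector of frame-to-frame displacements, one plausible
    way to encode the spatial, depth, and temporal context of key points.
    """
    displacements = np.diff(keypoints, axis=0)   # (WINDOW-1, NUM_POINTS, 3)
    return displacements.reshape(-1)             # flatten to one descriptor

def detect_fall(video_keypoints, clf, vote_threshold=0.5):
    """Classify every trajectory descriptor in a video and integrate.

    video_keypoints: (num_frames, NUM_POINTS, 3) key points per frame.
    clf: trained RandomForestClassifier (1 = fall, 0 = non-fall).
    A simple majority-style integration rule is assumed here: a fall is
    reported if more than vote_threshold of the descriptors are positive.
    """
    descriptors = [trajectory_descriptor(video_keypoints[t:t + WINDOW])
                   for t in range(len(video_keypoints) - WINDOW + 1)]
    votes = clf.predict(np.stack(descriptors))
    return votes.mean() > vote_threshold

# Training on labelled windows (X: descriptors, y: 0/1 fall labels):
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```

In this sketch, concatenated displacements stand in for the paper's trajectory descriptor because they jointly capture motion in the image plane, depth change, and temporal ordering; any richer encoding would slot into `trajectory_descriptor` without changing the rest of the pipeline.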
