Abstract

Automatic fall detection is essential for elderly people, particularly those who live alone, because a fall creates a pressing need for immediate medical assistance. In this paper, we propose a highly effective fall detection method based on a joint motion map processed by two parallel convolutional neural networks. Compared with the widely used joint trajectory method (JTM), our method provides three major improvements. First, the three channels (R, G, and B) of a pixel are used to store the relative motion of a given joint in 3D coordinates, so once actions are encoded as images, the human action recognition problem is reduced to a multi-class classification problem. Moreover, the number of input parameters is dramatically reduced because only the 25 joints of the human skeleton are considered. Second, the motion information in each frame is encoded as an independent slice of the motion image, which avoids the information loss caused by overlapping action trajectories. Third, guided by a medical experiment, the limit of stability test (LOST), the start and end key frames of a possible fall can be precisely estimated, so motion images of a fixed size can be generated. Our method was evaluated on two publicly available datasets: the Telecommunication Systems Team fall detection dataset v2 (TST v2) and the UTKinect-Action3D dataset (UT-A3D). The experimental results show that our method achieves an accuracy of 97.35% on TST v2 and performs excellently in the fall discrimination capability test on UT-A3D.
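To make the encoding concrete, the sketch below illustrates one plausible reading of the scheme described above: per-frame 3D joint displacements are mapped into the R, G, and B channels of a pixel, each frame becomes an independent column of the motion image, and the sequence is resampled between the estimated start and end key frames to yield a fixed-size image. The function name, normalization, and frame count are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def encode_motion_map(skeletons, num_frames=64):
    """Illustrative encoding (not the paper's exact scheme).

    skeletons: array of shape (T, 25, 3) holding the 3D positions of
    the 25 skeleton joints for each of T frames between the estimated
    start and end key frames of a possible fall.
    Returns a (25, num_frames, 3) uint8 motion image.
    """
    # Relative motion of each joint between consecutive frames:
    # the (dx, dy, dz) of a joint will fill one pixel's R, G, B.
    deltas = np.diff(skeletons, axis=0)                # (T-1, 25, 3)

    # Resample to a fixed number of frames so every action produces
    # an image of the same size, as required by the CNN input.
    idx = np.linspace(0, len(deltas) - 1, num_frames).astype(int)
    deltas = deltas[idx]                               # (num_frames, 25, 3)

    # Min-max normalize the displacements into [0, 255] so they can
    # be stored as 8-bit color channels (assumed normalization).
    lo, hi = deltas.min(), deltas.max()
    pixels = (deltas - lo) / (hi - lo + 1e-8) * 255.0

    # Each frame is an independent slice (column) of the image:
    # joints along the height, time along the width, motion in RGB.
    return pixels.transpose(1, 0, 2).astype(np.uint8)  # (25, num_frames, 3)
```

Because each frame occupies its own column, motion from different time steps is never drawn over the same pixels, which is how this style of encoding sidesteps the trajectory-overlap problem of JTM-like images.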
