Abstract

Older people living alone face serious risks, and falls are the foremost threat to their lives. In this paper, a new vision-based fall detection method is proposed to help older people live independently and safely. The proposed method combines shape deformation and motion information to distinguish falls from normal activity. The main contributions of this paper are a new descriptor based on silhouette deformation and a new image-sequence representation that captures changes between postures, which provides discriminative information for action classification. Experiments are conducted on two state-of-the-art datasets (the SDU Fall and UR Fall datasets) and a comparative study is presented. The results demonstrate the ability of the proposed method to differentiate fall events from normal activity, with accuracies of up to 98.41% on the SDU Fall dataset and 95.45% on the UR Fall dataset.
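The descriptor and sequence representation themselves are detailed in the body of the paper. Purely for orientation, the sketch below shows one simple way silhouette-deformation and motion cues of this kind could be computed: it is a minimal, assumption-laden example (OpenCV background subtraction, a bounding-box height/width ratio as the shape cue, a crude frame-difference motion measure, fixed thresholds, and a placeholder video path), not the paper's actual descriptor, representation, or classifier.

```python
# Illustrative sketch only (not the paper's method).
# Assumes: OpenCV (cv2), NumPy, and a placeholder input file "video.avi".
import cv2
import numpy as np

def silhouette_aspect_ratio(mask, min_area=500):
    """Height/width ratio of the largest silhouette in a binary mask, or None."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    if cv2.contourArea(c) < min_area:
        return None
    _, _, w, h = cv2.boundingRect(c)
    return h / float(w)

def detect_fall(video_path, ratio_drop=0.5, motion_thresh=15.0):
    """Flag frames where the silhouette flattens sharply while motion is high,
    i.e. a tall, narrow standing shape collapsing into a wide, lying shape."""
    cap = cv2.VideoCapture(video_path)
    bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    prev_mask, prev_ratio = None, None
    candidates, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Foreground silhouette via background subtraction, then noise cleanup.
        mask = bg.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        ratio = silhouette_aspect_ratio(mask)
        if ratio is not None and prev_ratio is not None and prev_mask is not None:
            # Crude motion cue: mean absolute change of the silhouette mask.
            motion = float(np.mean(cv2.absdiff(mask, prev_mask)))
            if ratio < ratio_drop * prev_ratio and motion > motion_thresh:
                candidates.append(frame_idx)
        if ratio is not None:
            prev_ratio = ratio
        prev_mask = mask
        frame_idx += 1
    cap.release()
    return candidates

if __name__ == "__main__":
    print(detect_fall("video.avi"))  # placeholder path for a test sequence
```

The thresholds and the single-frame comparison are deliberately simplistic; the point is only to illustrate how a shape-deformation cue and a motion cue can be combined, which is the general idea the abstract describes.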
