Abstract

The aim of the algorithm is to detect the abnormal actions to which elderly people are more prone, in order to make them more independent and improve their quality of life. The framework constructs a robust feature vector by computing the R-transform and Zernike moments on average energy silhouette images (AESIs). The AESIs are generated by the integral sum of segmented silhouettes obtained from the Microsoft Kinect v1 sensor. The proposed feature descriptor possesses scale-, translation-, and rotation-invariant properties, is less sensitive to noise, and minimizes data redundancy. This enhances the proposed algorithm's robustness and makes the classification process more efficient. The proposed work is validated on a novel abnormal human action (AbHA) dataset and three publicly available 3D datasets: the UR fall detection dataset, the Kinect Activity Recognition dataset (KARD), and the multi-view NUCLA dataset. The proposed framework exhibits superior results to other state-of-the-art methods in terms of average recognition accuracy (ARA). The experimental results demonstrate 96.5%, 96.64%, 95.9%, and 86.4% ARA on the UR fall detection dataset, the KARD dataset, the AbHA dataset, and the multi-view NUCLA dataset, respectively.
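The pipeline described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function names are hypothetical, the AESI is approximated as the normalized average of binary silhouette frames, and the R-transform is computed as the squared Radon projection (via image rotation) integrated over the radial axis, then max-normalized for approximate scale invariance. The Zernike-moment part of the descriptor is omitted here.

```python
# Hedged sketch of the AESI + R-transform feature pipeline (assumed details).
import numpy as np
from scipy.ndimage import rotate

def build_aesi(silhouettes):
    """Average energy silhouette image: mean of binary silhouette frames."""
    stack = np.stack(silhouettes).astype(float)
    return stack.mean(axis=0)

def r_transform(image, angles=np.arange(0, 180, 10)):
    """R-transform: for each angle, take the Radon projection (column sums of
    the rotated image) and integrate its square over the radial axis."""
    feats = []
    for theta in angles:
        rotated = rotate(image, theta, reshape=False, order=1)
        projection = rotated.sum(axis=0)       # discrete Radon projection
        feats.append(np.sum(projection ** 2))  # integrate squared projection
    feats = np.array(feats)
    return feats / feats.max()                 # normalize (approx. scale invariance)

# Toy example: two slightly shifted square silhouettes from a "sequence".
s1 = np.zeros((32, 32)); s1[8:16, 8:16] = 1.0
s2 = np.zeros((32, 32)); s2[10:18, 10:18] = 1.0
aesi = build_aesi([s1, s2])
vec = r_transform(aesi)  # 18-dimensional feature vector, one entry per angle
```

The squared-projection form of the R-transform is what gives the descriptor its translation invariance: shifting the silhouette only shifts each Radon projection along the radial axis, leaving the integral of its square unchanged.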
