Abstract

This study introduces an action descriptor for efficient human action recognition in one- and two-person videos. The proposed descriptor captures information such as motion, spatio-temporal structure, deviation with respect to the centroid, and critical-point and keypoint detection, which existing approaches fail to address efficiently. Action descriptors are developed from signature-based optical flow, signature-based corner points, and binary robust invariant scalable keypoints (BRISK), and are applied to silhouette and silhouette-skeleton frames. These descriptors are then combined into a concatenated action descriptor (CAD). The reference video frame plays an important role in developing the descriptors. The Weizmann dataset (one person) and both the clean and noisy versions of the SBU Kinect Interaction dataset (two persons) are used for evaluation, and classification is performed with a support vector machine. Experimental results demonstrate that CAD not only outperforms each of the individual proposed descriptors but also provides better performance than state-of-the-art approaches.
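The abstract's core idea, computing several per-frame descriptors from silhouette frames and concatenating them into one vector (the CAD) before SVM classification, can be illustrated with a toy sketch. This is not the authors' implementation: the `motion_part` (plain frame differencing standing in for signature-based optical flow) and `centroid_part` (per-frame foreground centroids standing in for the centroid-deviation features) are simplified, hypothetical substitutes chosen only to show the concatenation pattern.

```python
import numpy as np

def motion_part(frames):
    # Crude motion cue: sum of absolute frame-to-frame differences,
    # one value per consecutive pair (stand-in for optical flow features).
    return np.array([np.abs(frames[i + 1] - frames[i]).sum()
                     for i in range(len(frames) - 1)], dtype=float)

def centroid_part(frames):
    # Spatial cue: centroid (row, col) of foreground pixels in each
    # binary silhouette frame, flattened into one vector.
    out = []
    for f in frames:
        ys, xs = np.nonzero(f)
        out.extend([ys.mean(), xs.mean()] if len(ys) else [0.0, 0.0])
    return np.array(out)

def concatenated_descriptor(frames):
    # CAD-style idea: join the individual descriptors into a single
    # fixed-length vector that a classifier (e.g. an SVM) could consume.
    return np.concatenate([motion_part(frames), centroid_part(frames)])

# Tiny synthetic sequence: a 2x2 "silhouette" sliding right on a 5x5 grid.
frames = [np.zeros((5, 5), dtype=float) for _ in range(3)]
for t, f in enumerate(frames):
    f[1:3, t:t + 2] = 1.0

cad = concatenated_descriptor(frames)
print(cad.shape)  # 2 motion values + 3 frames * 2 centroid coords -> (8,)
```

In the paper the concatenated vectors would be fed to a support vector machine for classification; here the sketch stops at descriptor construction, since the exact feature definitions are not specified in the abstract.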
